They can’t do that since it would make it obvious to the target that they should counter-attack.
As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that’s part of the weight-loss effect of taking 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 mg capsules taken with water.
I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.
It’s still too early to assess the impact on cholesterol, but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (e.g. on a day where I previously would have eaten 4 slices of pizza for lunch, I find it easy to eat 2 slices + psyllium instead).
Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.
I personally think it’s good for us to protect friendly countries like this, but isn’t a Chinese invasion of Taiwan good from an AI risk perspective, since destroying the main source of advanced chips would slow down timelines?
You also mention Trump’s anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).
I think it’s important that AIs will be created within an existing system of law and property rights. Unlike animals, they’ll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.
I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AIs that follow the existing system of law and property rights (following the intent of the laws, not exploiting loopholes, not maliciously complying, not trying to get the law changed, etc.), then that would be a solution to the alignment problem. The problem is that we don’t know how to do that.
I think trying to be Superman is the problem, but I’m ok if that line of thinking doesn’t work for you.
Do you mean in the sense that people who aren’t Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn’t save people in other countries because that’s bad somehow?
The argument using Bernard Arnault doesn’t really work. He (probably) won’t give you $77 because if he gave everyone $77, he’d spend more than his entire fortune ($77 × 8 billion people is over $600 billion). But we don’t need an AI to give us billions of Earths; just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.
(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein, I just don’t think this particular argument in the specific way it was written in this post works)
I’m only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don’t think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally Utilitarian-maxxing.
The EA trolley problem is that there are thousands (or millions) of trolleys, each with varying difficulty of stopping, barreling toward varying groups of people. The problem isn’t that stopping them hurts other people (it doesn’t), it’s that you can’t stop them all. You don’t need to be a utilitarian to think that if it’s raining planes, Superman should start by catching the 747s.
[1] For example, high-paying finance jobs are high-stress and many people don’t like working them, but they’re not actually bad for the world.
One listed idea was that you can buy reservations at one website directly from the restaurant, with the price counting as a down payment toward the meal. The example given was $1,000 for a table for two at Carbone, with others being somewhat less. As is pointed out, that fixes the incentives for booking, but once you show up you’re now in all-you-can-eat mode at a place not designed for that.
I’ve been to several restaurants that do some form of this, from a small booking fee that gets refunded when you check in, to just paying entirely up-front (for restaurants with pre-set menus).
This is built into OpenTable so it’s not even that hard. I’m really confused why more restaurants don’t do this.
I’m not a video creator, but I wonder if this could be turned into a useful tool that takes the stills from a video and predicts which ones will get the highest engagement.
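I’m imagining something like the sketch below (Python; the scoring function is a stand-in heuristic, and the filename and sampling interval are made up; a real tool would swap in a model trained on actual engagement data):

```python
# Sketch: sample stills from a video and rank them by a (placeholder)
# engagement score. Requires: pip install opencv-python
import cv2

def sample_frames(path, every_n_seconds=5):
    """Grab one frame every `every_n_seconds` seconds."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = max(1, int(fps * every_n_seconds))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append((i / fps, frame))  # (timestamp in seconds, image)
        i += 1
    cap.release()
    return frames

def engagement_score(frame):
    """Placeholder heuristic: prefer bright, saturated frames.
    A real version would call a model trained on thumbnail click data."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return float(hsv[..., 1].mean() + hsv[..., 2].mean())

def best_stills(path, top_k=3):
    scored = [(engagement_score(f), t, f) for t, f in sample_frames(path)]
    return sorted(scored, key=lambda x: x[0], reverse=True)[:top_k]

if __name__ == "__main__":
    for score, t, frame in best_stills("video.mp4"):  # hypothetical input file
        cv2.imwrite(f"still_{t:.0f}s.jpg", frame)
        print(f"t={t:.0f}s score={score:.1f}")
```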
Also if anyone’s interested in the other meetups I mentioned, there’s:
The Millenial Social Club meetup group plays board games every Friday in the food court in Lincoln Tower South in Bellevue. The group is always huge (30+ people). It looks like they started doing it on Sundays recently too. https://meetu.ps/e/Ns6hh/blHm6/i
There’s a Seattle Rationalists reading group that meets on Mondays in Seattle. https://meetu.ps/e/NrycV/blHm6/I
Seattle Effective Altruists occasionally has social meetups in Redmond but I don’t know when the next will be: https://meetu.ps/e/Ns3Gt/blHm6/I
If anyone finds any other social rationalist-adjacent meetups on the east side I’d love to know, since I’m not really into book clubs and getting into Seattle is too hard after work.
In case anyone’s wondering, the lights I talked about were these:
I have 8 of the 4 ft 5000K version (they’re cheaper in 4-packs). I have them plugged into a switched outlet and daisy-chained together, and they’re attached at the top of the wall to make it look like light is coming down from all around. They’re tedious to set up but worth it in my opinion.
I like the 5000K version but some people might like warmer light like 4000K (or 6500K if you really like blue). https://www.waveformlighting.com/home-residential/which-led-light-color-temperature-should-i-choose
There are probably cheaper, similarly good lights available, but Waveform’s marketing materials worked on me: https://www.waveformlighting.com/high-cri-led
I have a severely ‘unbalanced’ portfolio of assets for this reason, and even getting rid of the step-up on death would not change that in many cases.
What would be the point of not realizing gains indefinitely if we got rid of the step-up on death?
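To spell out the mechanism with a toy example (made-up numbers): buy stock for $100k and let it grow to $1M. Under current law, if you die holding it, your heirs’ cost basis steps up to $1M and the $900k of unrealized gain is never taxed as income. Without the step-up, they’d inherit your $100k basis and the tax would eventually come due, so indefinite deferral would mostly just buy the time value of the deferred tax.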
I don’t enjoy PT or exercise, but mostly because it’s boring / feels like a waste of time. My peanut butter is to do activities that involve exercise but where the purpose isn’t strictly exercise, or where I get some other benefit:
Biking to work every day takes me about the same amount of time as driving and is more fun. Hills weren’t fun so I got an e-bike and with sufficient assist they became fun again. As I get more in shape, I find myself turning the assist down because I don’t really need it.
Biking to restaurants and bars is also fun.
I like going on walks with friends and talking, so why not do that while walking up a mountain?
I joined a casual dodgeball league for fun and meeting people, and as a side effect do the cardio equivalent of two hours of jogging every Sunday.
Indoor rock climbing feels a little bit like exercise, but it’s also a group activity that involves a lot of downtime just talking.
(I’ve yet to find a good way to mix my shoulder PT into anything fun, so I just keep exercise bands at my desk at work.)
It would be expensive, but it’s not a hard constraint. OpenAI could almost certainly raise another $600M per year if they wanted to (they’re allegedly already losing $5B per year, so this would increase their burn rate by only about 12%).
Also the post only suggests this pay structure for a subset of employees.
A century ago, it was predicted that by now, people would be working under 20 hours a week.
And this prediction was basically correct, but missed the fact that it’s more efficient to work 30-40 hours per week while working and then take weeks or decades off when not working.
The extra time has gone to more leisure, less child labor, more schooling, and earlier retirement (plus support for people who can’t work at all).
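To put rough numbers on that (illustrative assumptions, not real statistics): 40 hours/week × 48 working weeks/year × 40 working years is about 76,800 hours of work. Spread over a ~60-year adult life (~3,100 weeks), that averages out to roughly 25 hours per week, and longer schooling and longer retirements push the average lower still.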
The Overpopulation FAQs is about overpopulation, not necessarily water scarcity. Water scarcity can contribute to overpopulation, but it is only one of multiple potential causes.
My point is that when LessWrongers see that there’s not enough water for a given population, we try to fix the water, not the people.
I wrote that EA is mostly misguided because it makes faulty assumptions. And to the contrary, I did praise a few things about EA.
Yes, I read your argument that preventing people from dying of starvation and/or disease is bad:
In some ways, the justification for EA assumes a fallacy of composition since EA believes that people can and should help everyone. [...] To the contrary, I’d argue that a lot of charities that supposedly have the greatest amount of “good” for humanity would contribute to overpopulation, which would negate their benefits in the long run. For example, programs to prevent malaria, provide clean water, and feed starving families in Sub-Saharan Africa would hasten the Earth’s likelihood of becoming overpopulated and exacerbate dysgenics.
So yes, maybe this is my cult programming, but I would rather we do the hard work of supporting a higher population (solar panels, desalination, etc.) than let people starve to death.
I’m partially downvoting this for the standard reason that I want to read actual interesting posts and not posts about “Why doesn’t LessWrong like my content? Aren’t you a cult if you don’t agree with me?”.
But I’m also downvoting because I specifically think it’s good that LessWrong doesn’t have a bunch of posts about how we’re going to run out of water(?!) if we don’t forcibly sterilize people, or that EA is bad because altruism is bad. Sorry, I just can’t escape my cult programming here. Helping people is Good Actually and I’d rather solve resource shortages by making more.
This is an interesting idea, but I found these images and descriptions confusing and not really helpful.
It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.