Yeah, but assuming your p(doom) isn’t really high, this needs to be balanced against the chance that AI goes well and your kid has a really, really, really good life.
I don’t expect my daughter to ever have a job, but I think that in more than half of the worlds that seem possible to me right now, she has a very satisfying life, one that is better than it would otherwise be in part because she never has a job.
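To make that balancing explicit, here is a minimal expected-value sketch in Python; the scenario probabilities and "life quality" scores are purely hypothetical placeholders for illustration, not my actual numbers.

```python
# Hypothetical illustration: weighing p(doom) against the chance AI goes well.
# All probabilities and "life quality" values below are made-up placeholders.
scenarios = {
    "doom": {"p": 0.10, "value": 0.0},           # AI goes badly
    "ai_goes_well": {"p": 0.55, "value": 10.0},  # really, really good life
    "muddle_through": {"p": 0.35, "value": 6.0}, # roughly like today
}

# Sanity check: the scenarios should be exhaustive.
assert abs(sum(s["p"] for s in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(s["p"] * s["value"] for s in scenarios.values())
print(f"Expected life quality: {expected_value:.2f}")  # 7.60 with these numbers
```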
Timothy Underwood
Thinksgiving Meeting
Budapest, Hungary – ACX Meetups Everywhere Fall 2024
Budapest, Hungary – ACX Meetups Everywhere Spring 2024
I’d note that ACOUP’s model of fires primacy making defence untenable between high-tech nations, while not completely disproven by the Ukraine war, is a hypothesis that seems much less likely to be true (or at least less true) than it did in early 2022. The Ukraine war has in most cases shown a strong advantage for a prepared defender, and the difficulty of taking urban environments.
The current Israel–Hamas war shows a similar tendency: Israel is moving very slowly into the core urban concentrations (i.e. it has surrounded Gaza City so far, but not really entered it), even though its superiority in resources relative to its opponent is vastly greater than Russia’s advantage over Ukraine was.
I’d expect per capita war deaths to have nothing to do with the offence/defence balance as such (unless the defence gets so strong that wars simply don’t happen, in which case the figure goes to zero).
Per capita war deaths in this context are about the ability of states to mobilize populations, and about how much damage the warfare does to the civilian population that the battle occurs over. I don’t think there is any uncomplicated connection between that and something like ‘how much bigger does your army need to be to successfully win against a defender who has had time to get ready’.
This matches my sense of how a lot of people seem to have… noticed that GPT-4 is fairly well aligned to what the OpenAI team wants it to be, in ways that Yudkowsky et al. said would be very hard, and still not view this as, at a minimum, a positive sign?
I.e. problems of the class ‘I told the intelligence to get my mother out of the burning building and it blew her up so the dead body flew out the window, because I wasn’t actually specific enough’ just don’t seem like a major worry anymore?

Usually when GPT-4 doesn’t understand what I’m asking, I wouldn’t be surprised if a human was confused also.
Thinksgiving Meetup: Sunday, Nov 19
ACX/LW Meetup: Sunday October 22, 2 pm at Tim’s
Budapest, Hungary – ACX Meetups Everywhere Fall 2023
Weirdly (and I think this is because my childhood definitely was not optimized for getting into a good university; I was homeschooled, and ended up transferring to Berkeley based on two years of perfect grades at a community college), reading the last paragraphs here made me rather nostalgic for the two or three weeks I spent doing practice SAT tests.
I mean, it kind of does fine at arithmetic?
I just gave GPT-3.5 three random x-plus-y questions, and it managed one that I didn’t want to bother doing in my head.
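If anyone wants to replicate this, here is a minimal sketch using the OpenAI Python client; the model name, number ranges, and prompt wording are my guesses for illustration, not what I actually typed.

```python
# Minimal sketch: quiz a chat model on random addition problems.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the prompt and model name here are illustrative.
import random
from openai import OpenAI

client = OpenAI()

for _ in range(3):
    x, y = random.randint(100, 9999), random.randint(100, 9999)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"What is {x} + {y}? Answer with just the number."}],
    ).choices[0].message.content
    print(f"{x} + {y} = {x + y}; model said: {reply.strip()}")
```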
I think the issue is that creating an incentive system where people are rewarded for being good at an artificial game that has very little connection to their real-world circumstances isn’t going to tell us anything very interesting about how rational people are in the real world, under their real constraints.
I have a friend who for a while was very enthused about calibration training, and at one point he even got a group of us from the local meetup (plus phil hazeldon) to do a group exercise using a program he wrote to score our calibration on numeric questions drawn from Wikipedia (a sketch of that kind of scoring is below). The thing is that while I learned from this to be way less confident about my guesses (which improves rationality), it is actually, for the reasons specified, useless to create 90% confidence intervals when making important real-world decisions.
Should I try training for a new career? The true 90% confidence interval on any difficult-to-pursue idea that I am seriously considering almost certainly includes both ‘you won’t succeed, and the time you spend will be a complete waste’ and ‘you’ll do really well, and it will seem like an awesome decision in retrospect’.
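For what it’s worth, the scoring idea is simple; here is a minimal sketch (I don’t know exactly how his program worked, and the interval data below is invented for illustration).

```python
# Minimal calibration-scoring sketch: for 90% confidence intervals,
# a well-calibrated guesser's intervals should contain the true value
# about 90% of the time.

def hit_rate(guesses):
    """guesses: list of (low, high, truth) tuples for 90% intervals."""
    hits = sum(1 for low, high, truth in guesses if low <= truth <= high)
    return hits / len(guesses)

# (low, high, truth) — e.g. numeric questions drawn from Wikipedia.
# These values are made up for illustration.
answers = [
    (1_000, 5_000, 3_200),     # truth inside the interval: hit
    (50, 120, 200),            # truth outside: miss
    (10_000, 90_000, 42_000),  # truth inside: hit
]

rate = hit_rate(answers)
print(f"Intervals contained the truth {rate:.0%} of the time (target: 90%)")
```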
Saturday June 24, meeting at Museum Kert
If you think p(doom) is 1, you probably don’t believe that terrorist bombing of anything will do enough damage to be useful. That is probably one of EY’s cruxes on violence.
You don’t become generally viewed by society as a defector when you file a lawsuit. Private violence defines you in that way, and thus marks you as an enemy of ethical cooperators, which is unlikely to be a good long-term strategy.
Yeah, but I read somewhere that loneliness kills. So actually risking being murdered by grass is safer, because you’ll be less lonely.
I think we agree though.
Making decisions based on tiny probabilities is generally a bad approach. Also, there is no option that is actually safe.
You are right that I have no idea whether near-complete isolation has a higher life expectancy than being normally social, and the claim would need to compare them to make logical sense in that way.
I think the claim does still make sense if interpreted as ‘whether it is positive or negative on net, deciding to be completely isolated has way bigger consequences, even in terms of direct mortality risk, than taking the covid vaccine’, and thus avoiding the vaccine should not be seen as a major advantage of being isolated.
Budapest, Hungary – ACX Meetups Everywhere Spring 2023
My experience is that it is like having extra in-laws, whom you may or may not like, but have to sort of get along with occasionally.
I don’t think most people actually talk very much with their in-laws, or assume that people whom an in-law dislikes should be disliked.
You might capture value out of that relative to broad equities if the world ends up both severely deflationary due to falling costs and one in which current publicly traded companies are mostly unable to compete in the new context.