That, but getting your army from mostly melee to mostly ranged and solving your operational problems helps a lot too.
I mean that AGI and autonomous cars are orthogonal problems, especially because autonomous cars require solving engineering issues (which have been discussed by other commenters) that are different from the software issues. It’s quite common here on Less Wrong to handwave the engineering away once the theoretical problem is solved.
Some animal species are able to adopt contraception-like practices too. For example, birds of prey typically let some of their offspring die of hunger when prey is scarce.
Note, though, that the agrarian Eurasian empires ended up winning their thousand-year struggle against the steppe peoples.
No, because the events which led to the unification of the Mongol tribes by Genghis Khan were highly contingent.
However, the military power overhang of the steppe peoples vs the agrarian states should have been obvious to anyone, since both the Huns and the Turks had done the same thing centuries before.
As usual you fall into the trap of neglecting the engineering and social-organisation problems and the time required to solve them. We don’t need AGI for autonomous cars; it will just take time.
High-end GPUs are needed for basically anything mundane today. No need to bring in AGI worries to make them a strategic resource.
Yet humans routinely sacrifice their own lives for the good of others (see: firefighters, soldiers, high-mountain rescue workers, etc.). The X-risk argument is more abstract but basically the same.
Less Wrong has, imo, a consistent bias toward thinking only ideas/theory are important and that the dirty (and lengthy) work of actual engineering will just sort itself out.
For a community that prides itself on empirical evidence, it’s rather ironic.
I’d be happy to play any of the A, B and C roles.
I’m around 1850 Elo FIDE, about 2000-2100 on Lichess. I play a couple of blitz games daily.
I’d be willing to play at almost any time control and have a lot of free time. I actually live in France, so a one-move-per-day game with someone living in the US would probably be ideal. Live sessions can be scheduled from 16:00 to 23:00 GMT on weekdays, and from 7:00 to 23:00 GMT on weekends.
As I said, I would be happy to play any role. I think it would be more interesting if the lower-rated player is not a total beginner, since total beginners are probably not hard to deceive. A decent club player with advisors about 300-500 Elo higher would be best, imo. And if we can experiment at many different Elo levels, even better.
Registering a prediction: assuming the Elo difference stays constant, better players will be much more difficult to deceive. And a GM would consistently pick out who is lying, if you could rope in Caruana, Carlsen, and Ding to run the experiment.
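For reference, a quick sketch of what those rating gaps mean under the standard Elo expected-score formula (the helper name below is just illustrative):

```python
# Expected score of the higher-rated player under the standard Elo model:
#   E = 1 / (1 + 10 ** (-gap / 400))
def expected_score(gap: float) -> float:
    """Expected score for the stronger side, given the rating gap."""
    return 1 / (1 + 10 ** (-gap / 400))

for gap in (100, 300, 500):
    print(f"{gap} Elo gap -> stronger side scores ~{expected_score(gap):.0%}")
# 100 Elo gap -> stronger side scores ~64%
# 300 Elo gap -> stronger side scores ~85%
# 500 Elo gap -> stronger side scores ~95%
```

So at a 300-500 point gap the advisors would win the large majority of straight games against the player, which is what makes the deception question interesting rather than trivial.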
Well, the Good Samaritan parable is the best-known, most important, and most striking parable in the Gospels on the very specific topic of whom you should help and how you should help them. It’s no wonder it’s a recurring name for Christian-inspired charities.
DeepL is generally better than Google Translate anyway.
Most western polytheistic religions (Roman, Greek...). Judaism*. Islam*. Buddhism. In fact Christianity, with its overemphasis on dogma, is somewhat of an exception.
I’m not saying those religions don’t include beliefs but that they are not defined by those beliefs.
I would find the shiny look rather annoying; wouldn’t a matte finish be better?
I kind of feel like the moment is not ideal for a huge development project in collaboration between Finland, Russia, the US and Canada.
Genuine question: how much of your opinion on college and higher education is due to the American system being insane?
Because, for example, in France university/college is mostly free. Nobody gets into debt to pay for tuition. How much would that change your opinion?
Concerning MAID, if past trends are to be believed, the most terrifying thing is that these numbers will only get worse.
To be honest, I’m just as afraid of aligned AGI as of unaligned AGI. An AGI aligned with the values of the PRC seems like a nightmare. If it’s aligned with the US Army it’s only really bad, and Yudkowsky’s dath ilan is not exactly the world I want to live in either...
1,000 mg is the standard dose in France, with 500 mg being used almost exclusively for children.
I think it’s weird that such prominent figures of the rationalist scene have such grim visions of the future. Where did techno-optimism go? They look more like Catholic conservatives...