Just a reminder that this site is not a 24-hour news network, or at least wasn’t until recently.
Why Q*, if real, might be a game changer
“Capitalism” or even “late-stage capitalism” is currently a purely negative-connotation term among progressives in the West, with no denotation left. Your definition is also non-central compared to the original and more standard one: “Capitalism is an economic system based on the private ownership of the means of production and their operation for profit.” This, incidentally, includes social democracy.
If the idea of eternal inflation and nucleating baby universes matches reality (a big if), or can be made to match reality (who knows, maybe with enough power we can affect the inflaton field), then potentially this could avoid the heat death of at least some universes.
This is all pure speculation, of course.
Further reading: https://www.preposterousuniverse.com/blog/2011/10/21/the-eternally-existing-self-reproducing-frequently-puzzling-inflationary-universe/
Huh, I had never heard of this umbrella Effective Ventures Foundation before, let alone about its ability to muzzle individual speech.
Well, I have the privileged position of being able to derive it from first principles, so it is “true” given certain rather mild assumptions about the way the universe works. Those assumptions stem from some observations (the constancy of the speed of light, the observations behind the Maxwell equations, etc.), lead to the relativistic free-particle Lagrangian, and are confirmed by others (e.g. atmospheric cosmic-ray muon decay). So this is not an isolated belief, but more like an essential part of the model of the world. Without it the whole ontology falls apart. And so does epistemology.
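For concreteness, here is a minimal sketch of the chain being gestured at (textbook special relativity, nothing specific to this exchange): the free-particle action is just the proper time up to a constant, and time dilation, the thing the muon observations confirm, drops straight out of it.

\[
S = -mc^{2}\int d\tau = \int \left(-mc^{2}\sqrt{1 - \frac{v^{2}}{c^{2}}}\right) dt \quad\text{(relativistic free-particle Lagrangian)}
\]

\[
\Delta t = \gamma\,\Delta\tau, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \quad\text{(time dilation, e.g. extended muon lifetimes)}
\]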
Given that there is no known physical theory that allows deliberate time travel (rather than being stuck in a loop forever to begin with), I am confused as to how you can estimate the cost of it.
I am all for squelching terrorism, but this site is probably not the right venue to discuss ways of killing people.
A more realistic and rational outcome: Alice is indeed an ass and it’s not fun to be around her. Bob walks out and blocks her everywhere. Now, dutchbook this!
It’s a good start, but I don’t think this is a reasonably exhaustive list, since I don’t find myself on it :)
My position is closest to your number 3: “ASI will not want to take over or destroy the world.” Mostly because “want” is a very anthropomorphic concept. The Orthogonality Thesis is not false, but inapplicable, since AIs are so different from humans: they did not evolve to survive, they were designed to answer questions.
“It will be possible to coordinate to prevent any AI from being given deliberately dangerous instructions, and also any unintended consequences will not be that much of a problem”
I do not think it will be possible, and I expect some serious calamities from people intentionally or accidentally giving an AI “deliberately dangerous instructions”. I just wouldn’t expect it to result in systematic extermination of all life on earth, since the AI itself does not care in the same way humans do. Sure, it’s a dangerous tool to wield, but it is not a malevolent one. Sort of 3-b-iv, but not quite.
But mostly the issue I see with doomerism is the Knightian uncertainty on any non-trivial time frame: there will be black swans in all directions, just as there have been lately (for example, no one expected the near-human-level LARPing that LLMs do while not being in any way close to sentient agents).
To be clear, I expect the world to change quickly and maybe even unrecognizably in the next decade or two, with lots of catastrophes along the way, but the odds of complete “destruction of all value”, the way Zvi puts it, cannot be evaluated at this point with any confidence. The only way to get this confidence is to walk the walk. Pausing and being careful and deliberate about each step does not seem to make sense, at least not yet.
Yeah, that looks like a bizarre claim. I do not think there is any reason whatsoever to doubt yours or Ben’s integrity.
That looks and reads… very corporate.
When did this become a “gossip about the outgroup” site?
Thought I’d comment in brief. I very much enjoyed your post and I think it is mostly right on point. I agree that EA does not have great epistemic hygiene, given what its aspirations are, and the veganism discussion is a case in point. (Other issues related to EA and CEA have been brought up lately in various posts, and are not worth rehashing here.)
As far as the quoted exchange with me goes, I agree that I did not state a proper disclaimer, even though one was quite warranted given the thrust of the post. My only intended point was that, while a lot of people do veganism wrong and some are not suited to it at all, an average person can be vegan without adverse health effects, as long as they eat a varied and enriched plant-based diet, periodically check their vitamin/nutrient/mineral levels, and make dietary adjustments as necessary. Some might find out that they are in the small minority for whom a vegan diet is not feasible, and they would do well to focus on what works for them and contribute to EA in other ways. Again, I’m sorry this seems to have come across wrong.
Oh, and cat veganism is basically animal torture; those who want to wean cats off farmed-animal food should focus on vat-grown meat for pet food, etc.
Sure, it’s not necessary that a sufficiently advanced AI has to work like the brain, but there has to be an intuition about why that is not needed to at least create a utility maximizer.
An octopus’s brain(s) is nothing like that of mammals, and yet the octopus is comparably intelligent.
“Sanity” may not be a useful concept in edge cases, but yes, being able to trust your mind to autopilot is definitely within the central definition of sanity, it’s a good observation.
You may also be interested in Scott’s post series on the topic, the latest being https://www.astralcodexten.com/p/contra-kirkegaard-on-evolutionary
FTFY: “Smile at strangers iff it has non-negative EV, because smiling is cheap and sometimes it does”.
“I am going to read you mind and if you believe in a decision theory that one-boxes in Newcomb’s Paradox I will leave you alone, but if you believe in any other decision theory I will kick you in the dick”
Sure, that’s possible. Assuming there are no Newcomb’s predictors in that universe, but only DK, rational agents believe in two-boxing. I am lost as to how it is related to your original point.
Let me clarify what I said. Any decision theory, or no decision theory at all, that results in someone one-boxing is rewarded. Examples: someone who hates touching transparent boxes; someone who likes the mystery of an opaque box; someone who thinks they don’t deserve a guaranteed payout and hopes for an empty box; someone who is a gambler; etc. What matters is the outcome, not the thought process.
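A toy sketch of that point, with hypothetical payoff numbers and a made-up list of agents, just to show that the reward keys off the action the predictor foresees, not the reasoning behind it:

```python
# Toy Newcomb setup: the predictor cares only about the predicted action
# ("one-box" or "two-box"), not about the reasoning that produced it.

def payoff(predicted_action: str, actual_action: str) -> int:
    """Standard Newcomb payoffs: the opaque box holds $1M iff one-boxing was predicted."""
    opaque = 1_000_000 if predicted_action == "one-box" else 0
    transparent = 1_000
    return opaque if actual_action == "one-box" else opaque + transparent

# Agents with very different "thought processes"; only the resulting action matters.
agents = {
    "hates touching transparent boxes": "one-box",
    "likes the mystery of the opaque box": "one-box",
    "feels undeserving, hopes the box is empty": "one-box",
    "careful two-boxing expected-value calculator": "two-box",
}

for reasoning, action in agents.items():
    # A reliable predictor predicts the same action the agent actually takes.
    print(f"{reasoning}: {action} -> ${payoff(action, action):,}")
```

Every one-boxer above walks away with $1,000,000 regardless of why they one-boxed; the two-boxer gets $1,000.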
Nothing can be “ruled out” 100%, but a lot would have to change for FTL travel to be possible. One thing that would have to go is Lorentz invariance, which means all of current fundamental physics, including the Standard Model of particle physics and the standard model of cosmology, would have to be broken. While this is not out of the question at very high energies, much higher than anything achieved in particle accelerators or in any observed natural processes, it is certainly incompatible with everything we have observed so far. There are plenty of open problems in fundamental physics, but they are not likely to be resolved without understanding what happens at very high energies, far beyond those created in the hearts of supernova explosions.
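To put one formula behind “Lorentz invariance would have to go” (again, textbook special relativity rather than anything specific to this thread): under a boost at speed v, the time separation between two events transforms as

\[
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

so for any signal with \(\Delta x / \Delta t > c\) there is a frame with \(v < c\) in which \(\Delta t' < 0\), i.e. the signal arrives before it is sent. FTL travel plus Lorentz invariance buys you causality violations, not just an engineering problem.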