Well, so far no such higher power seems forthcoming, and totalizing ideologies grip the public imagination as surely as ever, so the need for liberalism-or-something-better is still live, for those not especially into wars.
Of course liberalism has its struggles; the whole point of it is that it’s the best currently known way to deal with competing interests and value differences short of war. This invites three possible categories of objection: that there is actually a better way; that there is no better way and liberalism also no longer works; or that wars are actually a desirable method of conflict resolution. From what I can tell, yours seem to fall into the second and/or third category, but I’m interested in whether you have anything in the first.
I don’t see a substantial difference between a (good enough) experience machine and an ‘aligned’ superintelligent Bostromian singleton, so the apparent opposition to the former, combined with enthusiastic support for the latter, from the archetypal transhumanist has always confused me.
That is, turns itself into a God, while also keeping its heart intact? Well, you can do that too (right?).
Likely wrong. The human heart is a loose amalgamation of heuristics adapted to deal with its immediate surroundings, and couldn’t survive ascension to godhood intact. As usual, Scott put it best (the Bay Area transit system analogy), but unfortunately stuck it at the end of a mostly-unrelated post, so it’s undeservedly obscure.
David Chapman has been banging on for years now against “Bayesianism”/early LW-style rationality being particularly useful for novel scientific advances, and, separately, against utilitarianism being a satisfactory all-purpose system of ethics. He proposes another “royal road”, something something Kegan stage 5 (and maybe also Buddhism for some reason), but, frustratingly, his writings so far are rich in exposition and problem statements while consisting of many IOUs on detailed solution approaches. I think that he makes a compelling case that these are open problems, insufficiently acknowledged and grappled with even by non-mainstream communities like the LW-sphere, but that he is probably overconfident about postmodernism/himself having much useful to offer in the way of answers.
I’d say that, in conflict-theory terms, the NYT adequately described Scott. They correctly identified him as a contrarian willing to entertain, and maybe even hold, taboo opinions, and to have polite interactions with out-and-out witches. Of course, we may think it deplorable that the ‘newspaper of record’ considers such people deserving of public naming and shaming, but they provided reasonably accurate information to those who share this point of view.
Maybe I’m missing some context, but wouldn’t it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor “aligned with humanity” (if we are somehow so objectively bad as to not deserve care from a benevolent, powerful, and very smart entity)?
This seems to presuppose a strong causal link between OpenAI’s destruction and avoiding the creation of an omnicidal AGI, which doesn’t seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible; but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.
To me the core reason for the wide disagreement seems simple enough: at this stage, the essential nature of AI existential risk arguments is philosophical rather than scientific. The terms are informal and there are no grounded models of the underlying dynamics (in contrast with, e.g., climate change). Large, persistent philosophical disagreements are very much the norm, and thus unsurprising in this particular instance as well, even among experts on currently existing AIs, since it’s far from clear how their insights would extrapolate to hypothetical future systems.
Here’s Chapman’s characterization of LW:
Among the (arguably) core LW beliefs that he has criticized over the years are Bayesianism as a complete approach to epistemology, utilitarianism as a workable approach to ethics, and the map/territory metaphor as a particularly apt way to think about the relationship between belief and reality.