I would be happy to take bets here about what people would say.
Sure, I DM’d you.
I think making inferences from that to modern MIRI is about as confused as making inferences from people’s high-school essays about what they will do when they become president.
Yeah, but it’s not just the old MIRI views, but those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview, e.g. regarding the competence of the rest of the world. I get the pretty strong impression that “a small group of people with overwhelming hard power” was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.
I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.
But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer’s worldview, I don’t think it would make much sense for them to give the AI to the US government (considered incompetent) or AI labs (negligently reckless).
It wasn’t specified, but I think they strongly implied it would be that or something equivalently coercive. The “melting GPUs” plan was explicitly not the intended pivotal act but rather an example of the required level of difficulty, and it was implied that the actual pivotal act would be something further outside the political Overton window. When you consider the ways “melting GPUs” would be insufficient, a plan like this is the natural conclusion.
doesn’t require replacing existing governments
I don’t think you would need to replace existing governments. Just block all AI projects and preserve your ability to continue doing so via military supremacy. Get existing governments to help you, or at least not interfere, via some mix of coercion and trade. Sort of a feudal arrangement with a minimalist central power.
“Taking over” something does not imply that you are going to use your authority in a tyrannical fashion. People can obtain control over organizations and places and govern with a light or even barely-existent touch; it happens all the time.
Would you accept “they plan to use extremely powerful AI to institute a minimalist, AI-enabled world government focused on preventing the development of other AI systems” as a summary? Like sure, “they want to take over the world” as a gist of that does have a bit of an editorial slant, but not that much of one. I think that, in the event that these plans actually succeeded, my original comment would be perceived by the majority of the world’s population as much less misleading than “they just want to do some helpful math uwu”. I also think it’s obvious that these plans indicate a far higher degree of power-seeking (in aim, at least) than that of virtually any other charitable organization.
(...and to reiterate, I’m not taking a strong stance on the advisability of these plans. In a way, had they succeeded, that would have provided a strong justification for their necessity. I just think it’s absurd to say that the organization making them is less power-seeking than the ADL or whatever)
Are you saying that AIS movement is more power-seeking than environmentalist movement that spent 30M$+[...]
I think that AIS lobbying is likely to have more consequential and enduring effects on the world than environmental lobbying, regardless of either movement’s absolute size in headcount or money, so yes.
“MIRI default plan” was “to do math in hope that some of this math will turn out to be useful”.
I mean yeah, that is a better description of their publicly-known day-to-day actions, but intention also matters. They settled on math after it became clear that the god-AI plan was not achievable (and recently gave up on the math plan too, when it became clear that it was not realistic either). An analogy might be an environmental group that planned to end pollution by bio-engineering a microbe that would spread throughout the world and make oil production impossible, then reluctantly settled for lobbying once they realized they couldn’t actually make the microbe. I think this would be a pretty unusually power-seeking plan for an environmental group!
Are you sure [...] et cetera are less power-seeking than AI Safety community?
Until recently the MIRI default plan was basically “obtain god-like AI and use it to take over the world” (a “pivotal act”); it’s hard to get more power-seeking than that. Other wings of the community have been more circumspect but also more active in things like founding AI labs, influencing government policy, etc., to the tune of many billions of dollars’ worth of total influence. Not saying this is necessarily wrong, but it does seem empirically clear that AI-risk-avoiders are more power-seeking than most movements.
let’s ensure that AGI corporations have management that is not completely blind to alignment problem
Seems like this is already the case.
Seconded.
Makes sense. But I think the OP is using the term to mean something different than you do (centrally, math and puzzle solving).
Hmm, but don’t puzzle games and math fit those criteria pretty well? (I guess if you’re really trying hard at either, there’s more legitimate contact with reality?) What would you consider a central example of a nerdy interest?
I wonder if “brains” of the sort that are useful for math and programming are necessarily all that helpful here. I think intuition-guided trial and error might work better. That’s been my experience dealing with chronic-illness-type stuff.
I think she meant he was looking for epistemic authority figures to defer to more broadly, even if it wasn’t because he thought they were better at math than him.
Some advanced meditators report that they do perceive experience as being basically discrete, flickering in and out of existence at a very high frequency (which is why it might appear continuous without sufficient attention). See e.g. https://www.mctb.org/mctb2/table-of-contents/part-i-the-fundamentals/5-the-three-characteristics/
Tangentially related: some advanced meditators report that their sense that perception has a center vanishes at a certain point along the meditative path, and this is associated with a reduction in suffering.
performance gap of trans women over women
The post is about the performance gap of trans women over men, not women.
I don’t know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps??). It’s much easier to infer that it’s likely some third factor than to know exactly what third factor it is. I actually think most of the evidence in this very post supports the third-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; and that most of the ostensible damage occurs before adulthood seems in tension with your smarter friends transitioning after high school.
I buy that trans women are smart, but I doubt “testosterone makes you dumber” is the explanation; more likely some third factor raises IQ and lowers testosterone.
I think using the universal prior again is more natural. It’s simpler to use the same complexity metric for everything; it’s more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair corresponds to a total description length that is approximately the sum of their Kolmogorov complexities; and the universal prior dominates the inverse-square measure, but the converse doesn’t hold.
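A minimal sketch of the second and third points, assuming the standard prefix-complexity conventions (the universal prior m assigns x weight roughly 2^{-K(x)} up to multiplicative constants; w below is just shorthand for the pair’s weight):

```latex
% A minimal sketch under standard conventions: K is prefix Kolmogorov complexity
% and the universal prior gives m(x) \approx 2^{-K(x)} up to constants.

% Consistency with Solomonoff induction: description lengths add, so the weight w
% of a (world, claw) pair is roughly
\[
  w(\text{world}, \text{claw}) \;\approx\; 2^{-\left(K(\text{world}) + K(\text{claw})\right)} .
\]

% Dominance: an inverse-square measure over claw indices, \mu(n) \propto 1/n^2,
% is computable, so the universal prior multiplicatively dominates it:
\[
  \exists\, c > 0 \ \forall n : \quad m(n) \;\ge\; c \cdot \mu(n) .
\]

% The converse fails: at simple large indices such as n = 2^k,
% m(2^k) \approx 2^{-K(k)} decays far more slowly than \mu(2^k) \propto 2^{-2k},
% so no constant makes \mu dominate m.
```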
Nice overview; I agree, but I think the 2016-2021 plan could still arguably be described as “obtain god-like AI and use it to take over the world” (admittedly with some rhetorical exaggeration, but like, not that much).