Basically, a “wait a decade quietly” strategy
I was thinking more like “ten weeks”. That’s a long time for an AGI to place its clone-agents and prepare a strike.
If you are really insisting that the only views that matter are inside views, well, that sounds more like religion than rational consideration.
If I did, why would I have replied to your outside view argument with another outside view argument?
If you had said “you hold the inside view to be generally more accurate than the outside view”, well, yeah, I don’t think that’s disputed here.
How would it lead to being defeated by a different AGI? That’s not obvious to me.
I suspect that a hostile AGI will have no problem taking over a supercomputer and then staying dormant until the moment it has overwhelming advantage over the world. All there would be to notice would be an unexplained spike of activity one afternoon.
Q: What makes you think that?
A: We live in a complex world where successfully pulling off a plan that kills everyone, and in a short amount of time, might be beyond what is achievable, the same way that winning against AlphaZero while giving it a 20-stone handicap is impossible even for a God-like entity with infinite computational resources.
Still waiting to hear your arguments here. “It just might be impossible to pull off complex plan X” is just too vague a claim to discuss.
Of course, to anyone who has studied the question in depth, that’s a bad argument, but I’m trying to tailor my reply to someone who claims (direct quote of the first two sentences) to be inclined to think that fear of rogue AI is a product of American culture if it doesn’t exist outside of the USA.
Nothing aggressive about noting that it’s a superficial factor. Maybe it would have come off better if I had used the LW term “outside view”, but it only came back to me now.
Yes, the Japanese don’t fear AIs the way the Americans do. But also, most of the recent major progress in AI has been made in the Western world. It makes sense to me that the ones at the forefront of the technology are also the ones who spot dangers early on.
Also, since superficial factors carry weight with you (not a criticism; it’s a good heuristic if you don’t have much time or resources to spend studying the subject more deeply), shouldn’t the ones who show the most understanding of the topic and/or general competence by getting to the forefront earn bonus credibility?
Or better put, I can conceive of many reasons why this plan fails.
Then could you produce a few of the main ones, to allow for examination?
Also, I don’t see how it builds those factories in the first place, or why we can’t use that time window to make the AGI produce explicit results on AGI safety.
What’s the time window in your scenario? As I noted in a different comment, I can agree with “days” as you initially stated. That’s barely enough time for the EA community to notice there’s a problem.
I downvoted this post for the lack of arguments (besides the main argument from incredulity).
I am saying that I believe that an AGI could theoretically kill all humans because it is not only a matter of being very intelligent.
Typo? (could not kill all humans)
I might have missed it, but it seems to be the first time you talk about “months” in your scenario. Wasn’t it “days” before? It matters because I don’t think it would take months for an AGI to build a nanotech factory.
Can you verify code to be sure there’s no virus in it? It took years of trial and error to patch up some semblance of internet security. A single flaw in your nanotech factory is all a hostile AI would need.
The diversity of outlets that you desire is to journalism what diversity of products is to markets generally. It is generally agreed that free markets are more efficient than centralized planning. Why not do the same for media? It’s not like there’s a lack of independent or outsider-funded media trying to survive while providing a different angle. But they’re not the targets of government funding. I don’t see how more funding could make it easier for those dissenting media to compete.
The EU already dictates a large part of the policy of its member states, and the official media in said states are already massively pro-EU. What makes you think an EU-owned media outlet would be a good way to correct that in the first place?
OK, but what’s the takeaway for those of us who don’t know the context?
I can’t point you in a precise direction, but I’ve seen the idea showing up sporadically for more than a decade now. The current voting system is obviously absurd and the root cause of many problems, but the obstacle to change is not a lack of viable alternatives, nor a lack of clever people convinced that it’s at least worth trying. Alternative voting systems have been implemented and work well. For example, in France there was a website (Parlement et Citoyens) that allowed people to vote on individual laws, lay out arguments for and against, and propose amendments. The vote of the (internet, French) people was surprisingly nuanced and well-argued. The problem is that this website was not connected to any actual exercise of power. A few members of the parliament showed interest in the site and claimed that they’d try to implement what the users decided, and some might even have tried to do it, but in the end it barely made a ripple in the pond.
I assume it is (or will be) the same problem with delegated voting. Of course it’s worth trying, but if it’s not connected to any actual power, people are gonna feel cheated and will deem the idea a failure in the same stroke.
Can we expect another chapter? I want to know what happens next!
Actually, I fully agree with that. I just have the impression that your choice of words suggested that Dave was being lazy or not fully honest, and I would disagree with that. I think he’s probably honestly laying out his best arguments for what he truly believes.
Fair enough. If you don’t have the time/desire/ability to look at the alignment problem arguments in detail, going by “so far, all doomsday predictions turned out false” is a good, cheap, first-glance heuristic. Of course, if you eventually manage to get into the specifics of AGI alignment, you should discard that heuristic and instead let the (more direct) evidence guide your judgement.
Talking about predictions: there was an AI winter a few decades ago, when most predictions of rapid AI progress turned out completely wrong. But recently, the opposite trend dominates: it’s the predictions that downplay the growth of AI capabilities that turn out wrong. What does your model say you should conclude from that?
It’s nice that you’re open to betting. What unambiguous sign would change your mind about the speed of AGI takeover, long enough before it happens that you’d still have time to make a positive impact afterwards? Nobody is interested in winning a bet where winning means “mankind gets wiped out”.