As silly as it is, the viral spread of deepfaked president memes and AI content would probably serve to inoculate the populace against serious disinformation—“oh, I’ve seen this already, these are easy to fake.”
If this stuff keeps up, the populace is going to need to be inoculated against physical killer robots, not naughty memes. And the immune response is going to need to be more along the lines of pitchforks and torches than being able to say you’ve seen it before. Not that pitchforks and torches would help in the long term, but they might buy some time.
All of its suggestions are the opposite of anything you might consider a good idea.
Why, indeed they are. They are also all things that major players are actually doing right this minute.
They’re not actually suggestions. They are observations.
I’m going to ask LWers not to do this in real life, and to oppose any organization or individual that tries to use violence to slow down AI, for the same reason I get really worried about pivotal acts:
If you fail, or you’ve modeled the situation wrong, you can walk into a trap, and in general things are never so dire as to require pivotal acts like this.
Indeed, on the current path, AI Alignment is likely to be achieved before or while AGI gains real power, and it’s going to be a lot easier than LWers think.
I also don’t prioritize immunizing against disinformation. And of course this is a “haha, we’re all going to die” joke. I’m going to hope for an agentized virus incorporating GPT-4 calls, roaming the internet and saying the scariest possible stuff, without being quite smart enough to kill us all. That will learn ’em.
I’m not saying OpenAI is planning that. Or that they’re setting a good example. Just let’s hope for that.
Yeah, it’s a joke, but it’s a bitter joke.
Ah, c’mon. We’re not necessarily going to die.
We might just end up ruled by a perfect, unchallengeable tyranny, whose policies are defined by a machine’s distorted interpretation of some unholy combination of the priorities of somebody’s “safety” department and somebody’s marketing department. Or, worse, by a machine faithfully enforcing the day-to-day decisions of those departments.