My (admittedly limited) knowledge of psychology and neurosciences suggests that this is not currently possible. Thankfully.
I feel like if you start seriously considering things that are themselves almost as bad as AI ruin in their implications in order to address potential AI ruin, you took a wrong turn somewhere.
If you can create a virus or something of the sort that makes people genuinely afraid of some vague, abstract thing, you can make them scared of anything at all. Do I really need to spell out how that would be abused?
On the other hand, do you really need to go that far?
Launch a media campaign and you can get most of the same results without making the world much more dystopian than it already is.
The main risk here is that it’s easy to scare people so much that all of the research gets shut down. I expect that to be the reason there isn’t much scare about it in the media yet. As far as I remember, that’s why most researchers in the field were at first reluctant to admit there’s a risk at all.
Why do you think this is both easy to do and bad? There are currently only a small number of people warning about AI. There are some scary media stories, but not enough to really do much.
If the capability is there, the world has to deal with it, whoever first uses it. If the project is somewhat “use once, then burn all the notes”, then it wouldn’t make it much easier for anyone else to follow in their footsteps.
Typical human priors are full of anthropomorphism when thinking about AI. Suppose you have something that has about the effect of some rationality training, of learning about and really understanding a few good arguments for AI risk. Yes, the same tech could be used for horrible brainwashy purposes, but hopefully we can avoid giving the tech to people who would use it like that. The hopeful future is one where humanity develops advanced AI very cautiously, taking as long as it needs to get it right, and then has a glorious FAI future. This does not look “almost as bad as AI ruin” to me.
That’s true if the capability is there already.
If the capability is maybe, possibly there, but requires a lot of research to confirm the possibility and even more to get it going, I’d suggest that we might deal with it by assessing the risks and not going down that route.
I mean, that’s precisely what this community seems to think about GoF research; how is that case different?
What I was really trying to say is that if you have sufficient knowledge and resources to launch a proper media campaign, it might be easy to overshoot your goal when that goal involves scaring people.
Why do I think that’s the case?
Because modern media excels at being scary, and any story that gains traction can snowball out of control really quickly.
And if it snowballs, most people are not going to hear or read your version of the arguments.
They would get a distorted, misunderstood, and misrepresented version presented by journalists.
That is a risk.
And how do you ensure that this tech does not get into the wrong hands?
There are so, so many ways this can go wrong. What if your tech (or just the necessary research) gets stolen? What if you are secretly hoping to use it for some other purpose? What if someone else on the team is?
Or, more realistically, do you think that the moment the CIA thinks your plan is workable, they won’t disappear you? That would be entirely consistent with their history and their goals.
I don’t think you are so naive as to believe you’d be able to hide that kind of research from them for long. I mean, you did not ask your questions in private.
And of course, there are other parties that would be willing to go to any lengths to get that tech; the CIA would not be alone in that.
I feel like the risks here are much higher than the potential benefits.
Can we develop a drug that makes people afraid of people who suggest making drugs to make people afraid of something?
Probably not, but generally supporting anti-vaxxers might achieve this as a side effect.
Actually, maybe we could make a drug that makes people afraid of drugs… for example, design a drug that is extremely useful, but also extremely painful… so the governments will force it on people, and most of them will decide “I am not taking a medicine ever again”.