Bifur should tell everyone that he is going to try to wake the Balrog, and dig directly towards it, openly advertising his intent to be the first to wake it. Spreading rumors that he intends to yoke the Balrog to a plow, and that he alone has a specific plan for doing so, would help too.
The action taken to control the insanity of that one crazy dwarf might prevent the catastrophe outright.
That action is likely to be involuntary commitment to a mental hospital. A clear hint that only crazies worry about balrogs.
Interesting. Should MIRI announce they have become negative utilitarians who think the universe contains more suffering than happiness and so they intend to create a paperclip maximizer?
Oog and Og are sitting in the forest. Oog says, ‘Man, someone could build a fire and totally destroy this whole place; we need to devote a lot more energy to stove research before that happens.’ Og says, ‘Sure, fire, OK, that’s sci-fi stuff. I’m going to go gather some berries while you waste your time with heat flow calculations and the stove safety problem.’
Oog doesn’t like being brushed off, so he decides to try a different tactic. Og returns to see Oog waving a brightly burning torch around in the air, dangerously close to the dry leaves and twigs of their shelter. Og’s reaction is far less skeptical than before: ‘Oog! You will kill us all, stop waving that thing around until we have at least a pit dug!’
So yes, the best thing you can do to popularize the cause of AI safety could be building an obviously unsafe AI and doing something demonstrably dangerous with it.