And if it just accelerates the doom, weren’t we still doomed?
Again, unclear. Elsewhere on this post, Vanessa Kosoy comments that she thinks there’s a 30% chance of success. The more you accelerate the creation of unsafe AI, the less time Vanessa has to work, and so presumably the lower her chance gets; perhaps attracting more talent helps, but past efforts in that direction don’t obviously seem to have been worth it.
Like, we talk a lot about ‘differential tech development’, or developing technology that makes it easier to create an aligned AGI more than it makes it easier to create an unaligned AGI. It would be really nice if “communicate to people about the risk” were this sort of thing—it really seemed like it should be, at first glance—but in retrospect I’m really not sure that it was, and I’m moderately pessimistic about future efforts that aren’t carefully targeted.
People should and deserve to know the probable future.
I mean, it’s not like we hid the arguments; we posted them on the public internet, I got published in a magazine arguing about it, Eliezer went on the Sam Harris podcast and talked about it, Nick Bostrom wrote a book about it, went on Joe Rogan and talked about it, and talked to the UK government about it. Heck, someone (as far as I can tell, in an attempt to blackmail MIRI over LessWrong moderation decisions) convinced Glenn Beck that AGI was dangerous, mostly to discredit the idea by association.
Like, I would be way more sympathetic to this line of argumentation if it engaged with past efforts and the various balances that everyone was trying to strike, rather than just saying “well, it didn’t work, have you tried more?”
Like, the Slaughterbots campaign was ‘more’, and most AI Alignment people I talk to think it was a bad idea because it probably alienated the military, and made it harder to win in worlds where the first project to get close to AGI is a military project, without clearly delaying the timeline for either AGI or lethal autonomous weapons.
Similarly, I think you have to make a choice between ‘fomenting social revolt’ and ‘working with the technocratic apparatus’, and I haven’t seen much in the way of successful social revolt in developed countries recently. [If the Canadian trucker convoy had been protesting the introduction of autonomous trucks instead of vaccine mandates, would it have been any more successful? What if it was just ‘getting rid of algorithms’, which seems to be how simpler versions of these sorts of arguments come out when actually pushed through the government?]
You have to admit that getting rid of algorithms would be a great help though. We might still be allowed to do some on paper, but that surely couldn’t be too dangerous?