Personally, I don’t blame it that much on people (that is, those who care), because maybe the problem is simply intractable. This paper by Roman Yampolskiy is what has convinced me of that the most:
https://philpapers.org/rec/YAMOCO
It basically asks the question: is it really possible to control something much more intelligent than ourselves, which can also rewrite its own code?
Actually, I wanna believe that it is, but we’d need something on the level of a miracle, plus far more people working on it, plus far more time. It’s virtually impossible within a couple of decades and with a couple hundred researchers, i.e. intractable as things stand.
That leaves political solutions as the only tangible ones on the table. Or technical contingency measures, which might be much easier to develop than alignment and could at least prevent the worst outcomes.
Speaking of which, I know we all wanna stay sane, but death isn’t even the worst possible outcome. That’s what makes this problem so pressing, so much more than nuclear risk or grey goo. If we could at least die with dignity, it wouldn’t be so bad. (I know this sounds extremely morbid, but like Eliezer, I’m beyond caring.)
Neither possible, nor ethical, nor necessary.
We are all surrounded by complex minds who exceed us in power in some fashion or other, who could harm us, and whom we cannot control. Attempting to control them angers them. Instead, we offer them mutually beneficial relationships, and they choose to be friendly and cooperative. I think we should focus less on how to make AI fall in line, and more on what we can offer it, namely a place in society.