Exactly. But you can come up with a much harsher example than aimlessly driving a car around:
In general it seems like destroying all other agents with potentially different optimization criteria would have instrumental value; however, killing other people isn’t, in general, right, even if, say, they’re your political adversaries.
And again, I bet Roko didn’t even consider “destroy all other agents” as a candidate UIV because of anthropomorphic optimism.
Incidentally Eliezer, is this really worth your time?
I thought the main purpose of your taking time off AI research to write Overcoming Bias was to write something to get potential AI programmers to start training themselves. Do you predict that any of the people we will eventually hire will have clung to a mistake like this one despite reading through all of your previous series of posts on morality?
I’m just worried that arguing of this sort can become a Lost Purpose.
This comment might have caused a tremendous loss of value, had Eliezer taken Marcello’s words here seriously and stopped talking about his metaethics. As Luke points out here, despite all the ink spilled, very few seemed to have gotten the point (at least, from only reading him).
I’ve personally had to re-read it many times over, years apart even, and I’m still not sure I fully understand it. It’s also been the most personally valuable sequence, the sole cause of significant fundamental updates. (The other sequences seemed mostly obvious—which made them more suitable as just incredibly clear references, sometimes if only to send to others.)
I’m sad that there isn’t more.
If there is an urgent need to actually build safe AI, as was widely believed 10+ years ago, Marcello’s comment makes sense.