AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter. If FAI is important, only people who are working on FAI can be expected to produce positive outcomes with any significant probability.
“Trying to” normally implies intent.
I’ll grant that someone working on AGI (or even narrower AI) who has become aware of the Friendliness problem, but doesn’t believe it is an actual threat, could be viewed as irresponsible—unless they have reasoned grounds to doubt that their creation would be dangerous.
Even so, “trying to destroy the world” strikes me as hyperbole. People don’t typically say that the Manhattan Project scientists were “trying to destroy the world”, even though some of them thought there was an outside chance it would do just that.
On the other hand, the Teller report on atmosphere ignition should be kept in mind by anyone tempted to think “nah, those AI scientists wouldn’t go ahead with their plans if they thought there was even the slimmest chance of killing everyone”.
I think machine intelligence is a problem that can be subdivided.
Some people can work on one part of the problem while others work on other bits. Not all parts of the problem have much to do with values; for example, see this quote:
In many respects, prediction is a central core problem for those interested in synthesising intelligence. If we could predict the future, it would help us to solve many of our problems. Also, the problem has nothing to do with values. It is an abstract math problem that can be relatively simply stated. The problem is closely related to the one of building a good quality universal compression algorithm.
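As an aside, the prediction/compression link in that quote can be made concrete. Below is a minimal sketch in Python; the function names and toy corpus are my own illustration, not anything from the quoted source. An order-2 context model counts which symbol follows each two-character context. Read one way, those counts give a next-symbol predictor; fed to an arithmetic coder, the same counts would drive a PPM-style compressor.

```python
from collections import Counter, defaultdict

def train(text, order=2):
    """Count, for each length-`order` context, how often each
    symbol follows it: the shared statistic behind both
    prediction and PPM-style compression."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        model[text[i:i + order]][text[i + order]] += 1
    return model

def predict_next(model, history, order=2):
    """Predict the most likely next symbol for the current context
    (returns '' if the context was never seen in training)."""
    counts = model.get(history[-order:])
    return counts.most_common(1)[0][0] if counts else ""

model = train("the cat sat on the mat and the cat sat on the hat")
print(predict_next(model, "the c"))  # -> 'a' (the context " c" is always followed by 'a')
```

Nothing in this depends on anyone’s values; it is just counting, which is the quote’s point.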