Almost sure current Eliezer disagrees with this kind of reasoning, and in general with this kind of attempt to derive something from nothing by the sheer power of "but certainly, a sufficiently smart AI would see the possibility of arriving at conclusion X, and therefore would conclude X", where X just happens to be what the author believes.
The AI would follow the reasoning outlined in the article only if it is specifically programmed to follow exactly that kind of reasoning… in which case, it is not completely fair to say it does not have any pre-established goals.
EDIT:
More importantly, there is a difference between a system having a goal, and a system using a token labeled “goal”. You can write a Python program that will output the string “it is meaningful to create art”. That does not mean that if you run that program, it will actually start creating art. Similarly, an AI programmed to generate statements might, following the steps outlined in the linked article, derive a sequence of tokens that say that something is meaningful. That doesn’t mean that the AI would actually try to do something about it.
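To make the point concrete, here is a minimal sketch of the kind of program I mean (the function name is just illustrative): it emits a sentence that mentions a goal, but nothing in it represents, pursues, or acts on that goal.

```python
def generate_statement() -> str:
    # Returns a string that talks about a "goal".
    # Nothing here models, wants, or pursues that goal.
    return "it is meaningful to create art"

if __name__ == "__main__":
    print(generate_statement())  # prints the sentence; no art gets created
```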