When evaluating the outputs of an algorithm, we have to consider the interestingness of the outputs under various counterfactuals. Otherwise, all game-theoretic agents are equivalent to either Cooperation Rock or Defection Rock, and all probabilities are either 0 or 1. And once you’re specifying outputs as a complete truth-table, you’re effectively specifying the algorithm.
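A minimal sketch of that point, with illustrative agent names assumed for the example (only "Cooperation Rock" and "Defection Rock" come from the comment above): on any one actual history, a conditional strategy can be indistinguishable from a rock; only its outputs under counterfactual inputs separate them, and a complete input-output table pins down the algorithm.

```python
# Hypothetical illustration -- agent definitions are assumptions for
# this sketch, not from the thread.

def cooperation_rock(opponent_history):
    return "C"  # constant output: ignores every counterfactual input

def defection_rock(opponent_history):
    return "D"  # likewise constant

def tit_for_tat(opponent_history):
    # Output depends on the input: cooperate first, then copy the
    # opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

# On one actual history, tit_for_tat looks just like Cooperation Rock:
history = ["C", "C", "C"]
print(tit_for_tat(history), cooperation_rock(history))  # C C

# The agents only come apart under a counterfactual input...
print(tit_for_tat(["D"]), cooperation_rock(["D"]))  # D C

# ...and tabulating outputs over *all* inputs is, in effect, a
# specification of the algorithm itself.
```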
To give a concrete example, I consider the server running lesswrong.com to be a banal mind, because it performs little computation of interest itself, even though when messages are sent from it to my computer those messages often contain very interesting ideas.
The interesting thing about minds is that they are able to produce interesting conjunctions of, and inferences from, seemingly unrelated data/experiences. Minds appear to be more than the sum of their experiences. This ability appears to defy the best efforts of coders to parallel.
EDIT: This got voted down, perhaps because of the above: it may be worth my stating that I am not posing a 'mysterious question' - the key words are 'appears to' - in other words, this is an aspect which needs significant further work.
I consider almost all code ‘banal’, in that almost all code ‘performs little computation of interest’. Pavitra clearly distinguishes between ‘interest’ and ‘value’.
Surely one way of looking at AI research is that it is an attempt to produce code that is not banal?
Perhaps data is banal and code may be either banal or non-banal.
Yes, that would be necessary for the quote to make sense. However, I call a mind banal to the extent that its output is.
I see your point. I was presuming a human mind w/ the typical range of experiences available to it.