“Everything has been said, yet few have taken notice of it. Since all our knowledge is essentially banal, it can only be of value to minds that are not”
Raoul Vaneigem
I’m having an extremely hard time understanding this quote. Its premises seem to contradict each other.
How can a mind be original (not banal) if everything has been said and all knowledge is banal?
Only the set of beliefs that are actually routinely expressed can be considered banal; even if someone else has already said something, if it occurs to me organically, then it’s probably useful.
Perhaps data is banal and code may be either banal or non-banal.
Yes, that would be necessary for the quote to make sense. However, I call a mind banal to the extent that its output is.
When evaluating the outputs of an algorithm, we have to consider the interestingness of the outputs under various counterfactuals. Otherwise, all game-theoretic agents are equivalent to either Cooperation Rock or Defection Rock, and all probabilities are either 0 or 1. And once you’re specifying outputs as a complete truth-table, you’re effectively specifying the algorithm.
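The point about counterfactuals can be made concrete with a minimal sketch (agent names and the "C"/"D" encoding are illustrative, not from the thread): two agents whose actual outputs coincide, and which only counterfactual inputs can tell apart.

```python
# "C" = cooperate, "D" = defect.

def cooperation_rock(opponent_history):
    """Ignores its input entirely; always cooperates."""
    return "C"

def tit_for_tat(opponent_history):
    """Copies the opponent's last move; cooperates on the first round."""
    return opponent_history[-1] if opponent_history else "C"

# Against an opponent who in fact always cooperated, the two agents'
# realized outputs are identical:
actual = ["C", "C", "C"]
print(cooperation_rock(actual) == tit_for_tat(actual))  # True

# Only a counterfactual history distinguishes them:
counterfactual = ["C", "C", "D"]
print(cooperation_rock(counterfactual))  # C
print(tit_for_tat(counterfactual))       # D
```

Tabulating the outputs over every possible history would be a complete truth table, and, as the comment notes, at that point you have effectively specified the algorithm itself.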
To give a concrete example, I consider the server running lesswrong.com to be a banal mind, because it performs little computation of interest itself, even though when messages are sent from it to my computer those messages often contain very interesting ideas.
I see your point. I was presuming a human mind w/ the typical range of experiences available to it.
The interesting thing about minds is that they are able to produce interesting conjunctions of, and inferences from, seemingly unrelated data/experiences. Minds appear to be more than the sum of their experiences. This ability appears to defy the best efforts of coders to parallel.
EDIT: This got voted down, perhaps because of the above. It may be worth stating that I am not posing a ‘mysterious question’ - the key words are ‘appears to’ - in other words, this is an aspect which needs significant further work.
I consider almost all code ‘banal’, in that almost all code ‘performs little computation of interest’. Pavitra clearly distinguishes between ‘interest’ and ‘value’.
Surely one way of looking at AI research is that it is an attempt to produce code that is not banal?
The implication is that connections between data are made by minds, and that minds that are not banal can make new and interesting connections between data.