Universal cooperation on all system levels would mean total optimisation of the universe as a neural network, and this can indeed be a "goal" (though an unattainable one). But following the steepest gradient of this loss function doesn't necessarily mean resolving the optimisation frustrations of particular subsystems (humans and society) first, or at all. Especially if the AI takes panpsychism seriously.
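To make that concrete with a toy sketch (my own construction, nothing from the paper): if the global loss decomposes as a sum over subsystems, steepest descent on the total can leave a high-loss but flat subsystem almost untouched while a steep one gets optimised away. A minimal NumPy illustration, where the quadratic losses and the "subsystem A = humans/society" labelling are purely assumed:

```python
import numpy as np

# Toy "universal" loss: the sum of two subsystem losses with very
# different curvatures. Subsystem A (think: humans/society) is
# frustrated (high loss) but flat; subsystem B is steep.
def loss_A(x):
    return 0.01 * (x - 5.0) ** 2  # shallow bowl: tiny gradients

def loss_B(y):
    return 10.0 * (y - 1.0) ** 2  # steep bowl: dominates the gradient

def grad_total(params):
    x, y = params
    return np.array([0.02 * (x - 5.0), 20.0 * (y - 1.0)])

params = np.array([0.0, 10.0])  # both subsystems start frustrated
for _ in range(100):
    params -= 0.01 * grad_total(params)  # steepest descent on the sum

x, y = params
print(f"loss_A after descent: {loss_A(x):.4f}  (started at {loss_A(0.0):.4f})")
print(f"loss_B after descent: {loss_B(y):.2e}  (started at {loss_B(10.0):.1f})")
```

The total loss drops quickly while subsystem A's frustration barely changes, which is all "first, or at all" is meant to convey.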
"constrain its aesthetic-structure values to apply to only a finite amount of the universe's negentropy"
I don't understand this phrase. (Neg)entropy is a numeric property of a physical system (including the whole universe), that is, a number. What does it mean to apply something to a "finite amount" of it?
I mean that we can assign a particular block of matter, priced by the amount of negentropy it contains, to a computational trajectory (e.g., a person or an AI). That is, we would fuel the AI with that amount of unspent energy.
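To put a rough number on "fuel the AI with that amount of unspent energy" (a back-of-the-envelope sketch: the 1 kWh budget and 300 K operating temperature are hypothetical, and pricing computation at the Landauer bound is itself an assumption), each irreversible bit operation costs at least k_B·T·ln 2 of free energy:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost_joules(temperature_k: float) -> float:
    """Minimum free energy to erase one bit (Landauer's principle)."""
    return K_B * temperature_k * math.log(2)

def bit_budget(free_energy_joules: float, temperature_k: float) -> float:
    """Max irreversible bit operations fundable by a given energy budget."""
    return free_energy_joules / landauer_cost_joules(temperature_k)

# Hypothetical allocation: 1 kWh of extractable free energy assigned
# to an AI's computation trajectory, dissipated at 300 K.
budget_j = 3.6e6  # 1 kWh in joules
print(f"Cost per bit at 300 K: {landauer_cost_joules(300):.2e} J")
print(f"Bit-operation budget:  {bit_budget(budget_j, 300):.2e}")
```

On those assumed numbers the block of matter buys roughly 10^27 irreversible bit operations, which is the sense in which the negentropy it contains prices the computation.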
Can you clarify what you mean by the comparison to the universe as a neural network? I'm having trouble understanding the paper due to my insufficient physics background, but it doesn't seem to draw a very coherent connection. I do think there's a connection to be drawn, but I'm extremely suspicious about whether this is the correct one.