In my limited experience, UCMA supporters explicitly rejected the assertion that “arguments” and “being convinced by an argument” are equivalent to “evidence” and “performing a Bayesian update on evidence”. So those three would be enough for evidence and updates, but not enough for argument and persuasion, according to my next-best guess of what they mean by “argument” and “convinced”.
For one, you need some kind of input system, and some process that looks at this input and connects it to pieces of an internal model; which in turn requires an internal model, some structure that sends signals from the input to the process, and some structure that gives the process modification access to other parts of the mind (to form the connections and perform the edits).
Then you need something that represents beliefs, and some weighing or filtering system in which the elements of the input are judged (compared against nodes in the current beliefs) and then evaluated using a bunch of built-in or learned rules, which implies having some rules of logic built into the structure of the mind, or the ability to learn such rules, both of which are non-trivial complexity-wise. Then those evaluations have to be organized in a way that lets the mind conclude whether the argument is sound or not, and the previous judgments of the elements integrated so that it can conclude whether the premises are also good. And then the mind also requires this result to send a signal to some dynamic process that modus-ponenses the whole thing: using the links to the concepts and beliefs to update and edit them to the new values prescribed by the compelling argument.
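To make concrete just how much machinery that is, here is a deliberately crude toy sketch (my own illustration; the class and rule names are invented, and nobody is claiming real minds look like this) of the pieces listed above: an input channel, a belief store, a judging/weighing rule, and a process with write access to the beliefs that fires when an argument comes out compelling enough.

```python
# Toy sketch (illustration only): the minimal "convinced by an argument" machinery.

class ToyArguer:
    def __init__(self):
        # Internal model: proposition -> degree of belief in [0, 1].
        self.beliefs = {"sky_is_blue": 0.99, "grass_is_green": 0.98}

    def judge_premise(self, proposition):
        # Weighing/filtering step: compare an input element against current beliefs;
        # unknown premises default to 0.5.
        return self.beliefs.get(proposition, 0.5)

    def consider_argument(self, premises, conclusion):
        # Evaluation step: a built-in "rule of logic" (here, a crude conjunction
        # of premise credences) decides how compelling the argument is.
        support = 1.0
        for p in premises:
            support *= self.judge_premise(p)
        # Update step: the process has modification access to the belief store,
        # so a sufficiently compelling argument edits the relevant belief.
        if support > 0.9:
            self.beliefs[conclusion] = max(self.beliefs.get(conclusion, 0.5), support)
        return self.beliefs.get(conclusion, 0.5)


mind = ToyArguer()
print(mind.consider_argument(["sky_is_blue", "grass_is_green"], "colors_exist"))
```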
Whew, that’s a lot of stuff that we need to design into our mind that seems completely unnecessary for a mind to have sentience, as far as I can tell. I sure hope we don’t live in the kind of weird universe where sentience necessarily implies or requires all of the above!
Which is where Occam/Solomonoff induction comes in. All of the above is weird, very specific, and extremely complex in most machine designs I can think of. Sentience is itself complex, but as far as we can tell it doesn’t seem to require any of the above. Positing that minds also require all these additional complexities seems like a very bad idea: probabilistically, ‘A’ is always at least as likely as ‘A and B’. Positing UCMA is a bit akin to positing ‘A and B and C and Fk but not Re and not any of Ke through Kz and L1..273 except L22’.
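The probabilistic point can be made explicit. The first line below is just the conjunction rule; the second is the usual Solomonoff-style reading, where a hypothesis’s prior weight shrinks with its description length (the notation and framing here are mine, added for illustration):

```latex
% Conjunction rule: adding a further requirement B can never make the
% hypothesis more probable.
\[ P(A \land B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A) \]

% Solomonoff-style prior: a hypothesis with description length (complexity)
% K(H) gets prior weight on the order of 2^{-K(H)}, so every extra bit of
% required machinery costs roughly a factor of two in prior probability.
\[ P(H) \;\propto\; 2^{-K(H)} \]
```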
In my limited experience, UCMA supporters explicitly rejected the assertion that “arguments” and “being convinced by an argument” are equivalent to “evidence” and “performing a Bayesian update on evidence”.
Eh, for the UCMA arguments I’m familiar with, their proponents would be happy to work within the (excellent) Solomonoff framework as long as you allowed for probabilities of 0 and 1. I get that this isn’t an unproblematic allowance, but nothing about the math actually requires us to exclude probabilities of 0 and 1 (so far as I understand it).
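As a side note on why that allowance is usually considered problematic (this is the standard textbook point, not something the comment itself spells out): under Bayes’ rule, probabilities of exactly 0 or 1 can never be moved by any further evidence.

```latex
\[ P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \]
% If P(H) = 1, then P(H and E) = P(E) and so P(H | E) = 1 for any evidence E
% with P(E) > 0; likewise, if P(H) = 0, then P(H | E) = 0. A belief pinned at
% 0 or 1 is therefore immune to updating, no matter what is observed.
```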
Whew, that’s a lot of stuff that we need to design into our mind that seems completely unnecessary for a mind to have sentience, as far as I can tell.
What is necessary? It’ll pay off for us to get this on the table.
If we knew exactly, someone would have a Nobel for it, and the nonperson predicate would be a solved problem by now, along with the Hard Problem of Consciousness and a throng of other things currently puzzling scientists the world over.
However, we do have a general idea of the direction to take, with an example here of some of the things involved. There’s still the whole debate around the so-called “hard problem of consciousness”, but overall it doesn’t even seem as if the ability to communicate is required for consciousness or sentience, let alone the ability to parse language in a form remotely close to ours, or in a form that allows anything akin to an argument as humans use the word.
But past that point, the argument is no longer about UCMAs, and becomes about morality engines (and whether morality or something akin to it must exist in all minds), consciousness, what constitutes an ‘argument’ and ‘being convinced’, and other things humans still understand very little about.
Okay, I see the problem. Let’s say this: within the whole of mind-space there is a subset of minds capable of morally-evaluable behavior. For all such minds, the UCMA is true. This may be a tiny fraction, but the UCMAist won’t be disturbed by that: no UCMAist would insist that the UCMA is UC for minds incapable of anything relevant to morality. How does that sound?
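Written out as a restricted quantifier (my formalization of the move, not the commenter’s own notation), the proposal replaces the unrestricted claim with a conditional one:

```latex
% Unrestricted version: every mind in mind-space is compelled by argument A.
\[ \forall m \in \text{Minds}:\ \text{Compelled}(m, A) \]

% Restricted version proposed above: only minds capable of morally-evaluable
% behavior are claimed to be compelled.
\[ \forall m \in \text{Minds}:\ \text{MorallyEvaluable}(m) \rightarrow \text{Compelled}(m, A) \]
```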
This sounds like a good way to avoid the heavyweight problems of the consciousness debates, so it seems like a reasonable move to me.
However, it retains the problem of defining “morality”, which is still unresolved. UCMAists will argue from theories of morality where UC is an element of the theory, while E.Y. already assumes a different metaethics in which there are no clear boundaries to human “morality” and morality-as-we-understand-it is a feature of humans exclusively. On that view, other things might have something akin to morality that is not morality, some minds would be able to evaluate moral behaviors without caring about morality in the slightest, and other minds we might consider morally important would nonetheless completely ignore any “UCMA” that would otherwise compel any human.