The claim that all squares are rectangles is unlikely to be true of all squares.
This is analogous to the conclusion of the above argument, not P1. An analogue to P1 would have to be something like ‘Any argument of the form “for all squares s:(X)s” is unlikely to be true.’ The question would then be this: does this analogue of P1 count as an argument of the form ‘for all squares s:(X)s’? That is, does it quantify over all squares?
You might think it doesn’t, since it just talks about arguments. But my point isn’t quite that it must count as such an argument, but rather that it must count as an argument of the same form as P2 (whatever that might be). The reason is that P2 is not like ‘all squares are rectangles’. If it were, P2 would be a (purportedly) universally compelling moral argument. But P2 is rather the claim that there is such an argument. P2 is ‘for all minds m:(Moral Argument X is compelling)m’.
I see what you’re talking about. My confusion originates in your definition of P2, rather than P1, where I had thought it originated.
Suppose two minds, A and B. A has some function for determining truth, let’s call it T. Mind B, on the other hand, is running an emulation of mind A, and its truth function is not(T).
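A minimal sketch of that construction in Python, just to make it concrete (the function names and the stand-in truth function are hypothetical):

```python
def truth_T(claim: str) -> bool:
    """Mind A's truth function (a stand-in; the details don't matter)."""
    return True

def mind_A(argument: str) -> bool:
    return truth_T(argument)

def mind_B(argument: str) -> bool:
    # B runs an emulation of A and negates whatever A concludes.
    return not mind_A(argument)

for claim in ["all squares are rectangles", "Moral Argument X"]:
    # Whatever compels A fails to compel B, and vice versa.
    assert mind_A(claim) != mind_B(claim)
```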
Okay, yes, this is an utterly pedantic kind of argument, but I think it demonstrates that in -all- of mindspace, it’s impossible to have any universally compelling argument, without relying on balancing two infinities (number of possible arguments and number of possible minds) against each other and declaring a winner.
That sounds pretty good to me, though I think it’s an open question whether or not what you’re talking about is possible. That is, a UCMA theorist would accuse you of begging the question if you assumed at the outset that the above is a possibility.
It’s only an open question insofar as what are considered “minds” and “arguments” remain shrouded in mystery.
I’m rather certain that for a non-negligible fraction of all minds, the entire concept of “arguments” is nonsensical. There is, after all, no possible combination of inputs (or “arguments”) that will make the function “SomeMind(): Print 3” output that it is immoral to tube-feed chicken.
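A hypothetical Python rendering of that toy function, just to make the “no possible combination of inputs” point literal:

```python
def some_mind(*arguments) -> int:
    # Ignores every input it is given; no "argument" can change its output.
    return 3

# Feeding it any claim whatsoever leaves its output untouched.
assert some_mind("it is immoral to tube-feed chicken") == some_mind() == 3
```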
Why are you certain of this?

Because of my experience with programming and working with computation, I find it extremely unlikely that, out of all possible things, the specific way humans conceptualize persuasion and arguments would be a necessary requirement for any “mind” (which I take here as a ‘sentient’ algorithm in the largest sense) to function.
If the way we process these things called “arguments” is not a requirement for a mind, then there almost certainly exists at least one logically-possible mind design which does not have this way of processing things we call “arguments”.
As another intuition, if we adopt the Occam/Solomonoff philosophy for what is required to have a “mind”, consider how much machinery the process of understanding arguments involves: being affected, influenced, or persuaded by them, running them through filters, comparing them with prior knowledge, and so on, until some arguments convince and others do not. Requiring all of that in every possible mind, as a component of an already-complex system called a “mind”, describes a much rarer corner of the space of possible universes than the universes in which simpler minds exist that do not understand arguments and are not moved by them.
I find it extremely unlikely that, out of all possible things, the specific way humans conceptualize persuasion and arguments would be a necessary requirement for any “mind” (which I take here as a ‘sentient’ algorithm in the largest sense) to function.
I don’t have any experience with programming at all, and that may be the problem: I just don’t see these reasons. To my mind (ha), a mind incapable of processing arguments, which is to say holding reasons in rational relations to each other or connecting premises and conclusions up in justificatory relations or whatever, isn’t reasonably called a mind. This may just be a failure of imagination on my part. So...
As another intuition, if we adopt the Occam/Solomonoff philosophy for what is required to have a “mind”
Could you explain this? I’m under the impression that being capable of Solomonoff induction requires being capable of 1) holding beliefs, 2) making inferences about those beliefs, and 3) changing beliefs. Yet this seems to me to be all that is required for ‘understanding and being convinced by an argument’.
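To make those three capacities concrete as I read them, here is a toy sketch (the prior and likelihood numbers are invented, purely for illustration):

```python
# (1) hold a belief, (2) make an inference from evidence, (3) change the belief.
prior_h = 0.5          # held belief: P(hypothesis)
p_e_given_h = 0.9      # likelihood of the evidence if the hypothesis is true
p_e_given_not_h = 0.2  # likelihood of the evidence otherwise

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e  # Bayes' rule
print(posterior_h)  # ~0.818: the belief has moved in response to the evidence
```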
In my limited experience, UCMA supporters explicitly rejected the assertion that “arguments” and “being convinced by an argument” are equivalent to “evidence” and “performing a Bayesian update on evidence”. So those three would be enough for evidence and updates, but not enough for argument and persuasion, according to my next-best-guess of what they mean by “argument” and “convinced”.
For one, you need some kind of input system, and some process that looks at this input and connects it to pieces of an internal model; that in turn requires an internal model, some structure that sends signals from the input to the process, and some structure that gives the process modification access to other parts of the mind (to form the connections and perform the edits) in some way.
Then you need something that represents beliefs, and some weighing or filtering system in which the elements of the input are judged (compared to other nodes in the current beliefs) and then evaluated using a bunch of built-in or learned rules (which implies having some rules of logic built into the structure of the mind, or the ability to learn such rules, both of which are non-trivial complexity-wise). Those evaluations then have to be organized so the mind can conclude whether the argument is valid, and the earlier judgments of the elements integrated so it can conclude whether the premises are also good. Finally, the mind needs this result to send a signal to some dynamic process in the mind that modus-ponens the whole thing, using the links to the relevant concepts and beliefs to update and edit them to the new values prescribed by the compelling argument.
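Here is a deliberately crude sketch of roughly that much machinery, with everything (the belief table, the “rules”, the names) invented for illustration:

```python
# Hypothetical, heavily simplified sketch of the machinery described above:
# an input channel, an internal model of beliefs, rules for evaluating an
# argument, and a process that edits beliefs when the argument goes through.

beliefs = {"premise_1": True, "premise_2": True, "conclusion": False}
rules = {("premise_1", "premise_2"): "conclusion"}  # built-in/learned "logic"

def process_argument(premises, conclusion):
    # Weigh the input against current beliefs: are the premises accepted?
    premises_ok = all(beliefs.get(p, False) for p in premises)
    # Evaluate against the rules: is the inferential step licensed?
    step_ok = rules.get(tuple(premises)) == conclusion
    # Only if both checks pass does the modus-ponens-like edit fire.
    if premises_ok and step_ok:
        beliefs[conclusion] = True  # modification access: edit the belief

process_argument(["premise_1", "premise_2"], "conclusion")
print(beliefs["conclusion"])  # True: the "compelling" argument updated a belief
```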
Whew, that’s a lot of stuff we would need to design into our mind, none of which seems necessary for a mind to have sentience, as far as I can tell. I sure hope we don’t live in the kind of weird universe where sentience necessarily implies or requires all of the above!
Which is where Occam/SI comes in. All of the above is weird, very specific, and extremely complex in most machine designs I can think of. Sentience is itself complex, but doesn’t seem to require the above as far as we can tell. Positing that minds also require all these additional complexities seems like a very bad idea. Statistically, ‘A’ is always at least as likely as ‘A and B’. Positing UCMA is a bit akin to positing ‘A and B and C and Fk but not Re and not any of Ke through Kz and L1..273 except L22’.
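The probability point here is just the conjunction rule; in standard notation:

```latex
P(A \land B) = P(A)\,P(B \mid A) \le P(A)
```

so each extra conjunct can only hold the probability fixed or drive it down.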
In my limited experience, UCMA supporters explicitly rejected the assertion that “arguments” and “being convinced by an argument” are equivalent to “evidence” and “performing a Bayesian update on evidence”.
Eh, the UCMA theorists I’m familiar with would be happy to work within the (excellent) Solomonoff framework as long as you allowed for probabilities of 0 and 1. I get that this isn’t an unproblematic allowance, but nothing about the math actually requires us to exclude probabilities of 0 and 1 (so far as I understand it).
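For what it’s worth, one way to see what hangs on that allowance (a standard observation, not anything specific to their view): under Bayes’ rule a probability of exactly 1 can never be moved by any further evidence, which is roughly the kind of immovability a universally compelling argument would need.

```latex
P(H) = 1 \;\Rightarrow\; P(E) = P(E \mid H), \quad\text{so}\quad
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 1
\quad \text{(assuming } P(E) > 0\text{)}.
```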
Whew, that’s a lot of stuff we would need to design into our mind, none of which seems necessary for a mind to have sentience, as far as I can tell.
What is necessary? It’ll pay off for us to get this on the table.
If we knew exactly, someone would have a Nobel for it, and the nonperson predicate would be a solved problem by now, along with the Hard Problem of Consciousness and a throng of other things currently puzzling scientists the world over.
However, we do have a general idea of the direction to take, with an example here of some of the things involved. There’s still the whole debate around the so-called “hard problem of consciousness”, but overall it doesn’t even seem as if the ability to communicate is required for consciousness or sentience, let alone the ability to parse language in a form remotely close to ours, or one that allows anything akin to an argument as humans use the word.
But past that point, the argument is no longer about UCMAs, and becomes about morality engines (and whether morality or something akin to it must exist in all minds), consciousness, what constitutes an ‘argument’ and ‘being convinced’, and other things humans still understand very little about.
Okay, I see the problem. Let’s say this: within the whole of mind-space there is a subset of minds capable of morally-evaluable behavior. For all such minds, the UCMA is true. This may be a tiny fraction, but the UCMAist won’t be disturbed by that: no UCMAist would insist that the UCMA is UC for minds incapable of anything relevant to morality. How does that sound?
This sounds like a good way to avoid the heavyweight problems with all the consciousness debates, so it seems like a good idea.
However, it retains the problem of defining “morality”, which is still unresolved. UCMAists will argue from theories of morality in which UC is an element of the theory, while E.Y. already assumes a different metaethics in which there are no clear boundaries to human “morality”, morality-as-we-understand-it is a feature of humans exclusively, other things might have something akin to morality that is not morality, some minds would be able to evaluate moral behaviors without caring about morality in the slightest, and some other minds we might consider morally important would nonetheless completely ignore any “UCMA” that would otherwise compel any human.