“(...) if it’s supported by argument or evidence, but if it is, then it’s no mere assumption.”
I do think it is supported by arguments/reasoning, so I don’t think of it as an “axiomatic” assumption.
A follow-up to that (not from you specifically) might be “what arguments?”. And—well, I think I pointed to some of my reasoning in various comments (some of them under deleted posts). Maybe I could have explained my thinking/perspective better (even if I wouldn’t be able to explain it in a way that’s universally compelling 🙃). But it’s not a trivial task to discuss these sorts of issues, and I’m trying to check out of this discussion.
I think there is merit to having this as a frame of mind: “Would it be possible to make a machine/program that is very capable with regard to criteria x, y, etc., and that optimizes for z?”.
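That frame of mind can be put in concrete (if toy) terms. Here is a minimal sketch of my own (all names and the specific algorithm are hypothetical illustrations, not anything from the discussion): a simple hill-climbing search where the *capability* (the search procedure) is written once, and the *goal* z is just a parameter handed to it.

```python
def hill_climb(objective, start, step=1.0, iters=1000):
    """Greedy 1-D hill climber: the search machinery is fixed,
    while the objective it optimizes for is swappable."""
    x = start
    for _ in range(iters):
        # Try a small move in each direction; keep whichever scores best.
        candidates = [x, x + step, x - step]
        x = max(candidates, key=objective)
    return x

# The same machinery optimizes for whatever z it is handed:
goal_near_3 = lambda x: -(x - 3.0) ** 2        # "z" = get close to 3
goal_near_minus_7 = lambda x: -(x + 7.0) ** 2  # "z" = get close to -7

print(hill_climb(goal_near_3, start=0.0))        # → 3.0
print(hill_climb(goal_near_minus_7, start=0.0))  # → -7.0
```

The point of the sketch is only that, in this toy setting, how capable the search is and what it is pointed at are independent knobs; nothing about the procedure constrains which objective you pass in.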
I think it was good of you to bring up Aumann’s agreement theorem. I haven’t looked into the specifics of that theorem, but broadly/roughly speaking I agree with it.
“Why call it an assumption at all? Something that is derivable from axioms is usually called a theorem.”

Partly because I was worried about follow-up comments that were kind of like “so you say you can prove it—well, why aren’t you doing it then?”.

And partly because I don’t make a strict distinction between “things I assume” and “things I have convinced myself of, or proved to myself, based on things I assume”. I do see there as sort of being a distinction along such lines, but I see it as blurry.

If I am to be nitpicky, maybe you meant “derived” and not “derivable”.
From my perspective there is a lot of middle ground between these two:
“we’ve proved this rigorously (with mathematical proofs, or something like that) from axiomatic assumptions that pretty much all intelligent humans would agree with”
“we just assume this without reason, because it feels self-evident to us”
Like, I think there is a scale of sorts between those two.
I’ll give an extreme example:
Person A: “It would be technically possible to make a website that works the same way as Facebook, except that its GUI is red instead of blue.”
Person B: “Oh really, so have you proved that then, by doing it yourself?”
Person A: “No”
Person B: “Do you have a mathematical proof that it’s possible?”
Person A: “Not quite. But it’s clear that if you can make Facebook like it is now, you could just change the colors by changing some lines in the code.”
Person B: “That’s your proof? That’s just an assumption!”
Person A: “But it is clear. If you try to think of this in a more technical way, you will also realize this sooner or later.”
Person B: “What’s your principle here, that every program that isn’t proven as impossible is possible?”
Person A: “No, but I see very clearly that this program would be possible.”
Person B: “Oh, you see it very clearly? And yet, you can’t make it, or prove mathematically that it should be possible.”
Person A: “Well, not quite. Most of what we call mathematical proofs are (from my point of view) a form of rigorous argumentation. I think I understand fairly well/rigorously why what I said is the case. Maybe I could argue for it in a way that is more rigorous/formal than I’ve done so far in our interaction, but that would take time (that I could spend on other things), and my guess is that even if I did, you wouldn’t look carefully at my argumentation and try hard to understand what I mean.”
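Person A’s claim can be made concrete with a toy sketch (the names and the pretend “renderer” here are hypothetical, purely for illustration): the site is a program, and its color is one line of configuration; changing blue to red leaves every other behavior untouched.

```python
def render_page(config):
    """Pretend renderer: returns a description of the GUI it would draw."""
    return f"<page style='color:{config['theme_color']}'>feed, friends, messages</page>"

blue_site = {"theme_color": "blue"}
red_site = dict(blue_site, theme_color="red")  # the one changed "line of code"

print(render_page(blue_site))  # same page, blue theme
print(render_page(red_site))   # same page, red theme
```

Nothing about this proves anything deep, of course; it just makes vivid why “a red Facebook is technically possible” doesn’t seem to need a formal proof.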
The example I give here is extreme (in order to get across how the discussion feels to me, I make the thing they discuss into something much simpler). But from my perspective it is sort of similar to the discussion regarding the Orthogonality Thesis. Like, the Orthogonality Thesis is imprecisely stated, but I “see” quite clearly that some version of it is true. Similar to how I “see” that it would be possible to make a website that technically works like Facebook but is red instead of blue (even though, as I mentioned, that’s a much more extreme and straightforward example).
As I understand it, you try to prove your point by analogy with humans: if humans can pursue somewhat any goal, a machine could too. But while we agree that a machine can have any level of intelligence, humans occupy quite a narrow spectrum. Therefore your reasoning by analogy is invalid.
From my point of view, humans are machines (even if not typical machines). Or, well, some will say that by definition we are not—but that’s not so important really (“machine” is just a word). We are physical systems with certain mental properties, and therefore we are existence proofs of physical systems with those certain mental properties being possible.
“machine can have any level of intelligence, humans are in a quite narrow spectrum”
True. Although if I myself somehow could work/think a million times faster, I think I’d be superintelligent in terms of my capabilities. (If you are skeptical of that assessment, that’s fine—even if you are, maybe you believe it in regards to some humans.)
“prove your point by analogy with humans. If humans can pursue somewhat any goal, machine could too.”
It has not been my intention to imply that humans can pursue somewhat any goal :)
I meant to refer to the types of machines that it would be technically possible for humans to make (even if we don’t want to do so in practice, and shouldn’t want to). And when saying “technically possible”, I’m imagining “ideal” conditions (so it’s not the same as me saying we would be able to make such machines right now; only that it at least would be theoretically possible).