I’m not arguing you should take any position on those axes; I’m just suggesting them as potential axes.
I think that falling on one extreme of the spectrum is equivalent to thinking the spectrum doesn’t exist, so yes, I guess people who are very aligned with a MIRI-style position on AI wouldn’t even find the spectrum valid or useful. Much like, say, an atheist wouldn’t find a “how much you believe in the power of prayer” spectrum insightful or useful. This was not something I considered while originally writing this, but even with it in mind now, I can’t think of any way I could address it.
As for your object-level arguments that the spectrums I present aren’t valid and/or that one extreme of them is nonsensical: I can’t say anything of much value on those topics right now that you probably haven’t already considered yourself.
To address your later point, I doubt I fall into that particular fallacy. Rather, I’d say I’m at the opposite extreme, where I’d consider most people and institutions to be beyond incompetent.
Hence I’ve reached the conclusion that improving on rationally legible metrics seems low-ROI, because otherwise rationlandia would already have arisen and ushered in prosperity and unimaginable power in a seemingly dumb world.
But I think that’s neither here nor there. As I said, I’m really not trying to argue that my view here is correct; I’m trying to figure out why wide differences in view exist in both directions.
Much like, say, an atheist wouldn’t find a “how much you believe in the power of prayer” spectrum insightful or useful.
I don’t understand what you mean? I’m an atheist and am clearly at the bottom of the spectrum. If you disagree with my objections to your axis, can you e.g. clarify what you mean when you say some datum is “non-quantifiable” and why that would prevent an AI from being able to use it decisively better than humans?
There are several kinds of things at the non-quantifiable extreme:
1. There’s “data” which can be examined in so much detail by human senses (which are intertwined with our thinking) that it would be inefficient to extract even with SF-level machinery. The example I gave was being able to feel another person’s muscles and the tension within them (hence the massage chair, though I agree smart massage chairs aren’t that advanced, so it’s a poor analogy). Maybe a better example is “what you can tell from looking into someone’s eyes”.
2. There’s data that is intertwined with our internal experience. For example, I can’t tell you the complex matrix of muscular tension I feel, but I can analyze my body and almost subconsciously decide “I need to stretch my left leg”. Similarly, I might not be able to tell you what the perfect sauce is for me, what patterns of activity it triggers in my brain, or how its molecules bind to my taste buds, but I can keep tasting the sauce and adding stuff until I conclude “voila, this is perfect”.
3. There are things beyond data that one can never quantify, like revelations from God or querying the global consciousness or whatever.
I myself am pretty convinced there are a lot of things falling under <1> and <2> that are practically impossible to quantify (not fundamentally or theoretically impossible), even given 1000x better cameras, piezo sensors, etc., and even given 0.x nm transistors making perfect use of all 3 dimensions in their packing (so, something like 1000x better GPUs).
I think <3> is false and mainly make fun of the people who believe in it (I’ve taken enough psychedelics not to be able to say this conclusively, but still). However, I still think it will be a generator of disagreement with AI alignment for the vast majority of people.
I can see very good arguments that both <1> and <2> are not critical and not that hard to quantify, and obviously that <3> is a giant hoax. Alas, my positions on those have remained unchanged, which is why I said a discussion around them may be unproductive.