I also feel somewhat bad about making a critique, as I don’t have money to fund you even if you satisfactorily responded to it.
But someone else might!
Have you tried talking to Eliezer or Marcello?
I don’t think I’ve ever talked with Marcello. Eliezer I’ve talked with many times but not so much in recent years. My relationship to existing Friendliness theory is that I agree with the overall strategy proposed; for the unsolved subproblems, I have placeholder ideas which I periodically revise; but I’m quite sure that significant portions of it will have to be grounded in a fundamental, subcomputational ontology, because substrate matters for consciousness, and even if an FAI is unconscious, its concept of consciousness needs to be correct.
Talking to Eliezer about these issues is something I save for the future, e.g. after the paper-in-progress is written, because only then will everything about my position be set out clearly and rigorously. But for now, neither of us has a set of ideas in the public domain which is sufficiently exact for a significant exchange to occur.
Just begun to consider? This doesn’t inspire confidence.
I figured that to make this sales pitch, I had better have a line on the computational side of CEV and not just the ontology. Also, the approach of economic doomsday has made me think as hard and fast as possible, since I may not get another chance for some time, and that was the best distillation of my existing ideas I could come up with. CEV involves reflection and computationally difficult tasks, and Mulmuley’s “flip” is a strategy for dealing with this in the context of P vs NP. It is definitely just a placeholder idea, but it has enough relevant complexity that it should be a good starting point if approached in a spirit of critical engagement. A good starting point for me, that is—at this stage I wouldn’t say that everyone else, or even anyone else, should bother with this perspective. To mention it is simply to say that I have a line of thought to pursue.
My metaphysical intuition finds it rather unlikely that string theory would be important for volition extrapolation … I’m skeptical that anyone could have ideas about what the true ontology is that bear on CEV.
Fundamental physical ontology matters for the ontology of consciousness, because exact states of consciousness can’t be coarse-grained physical states, unless you want to be a property dualist with a one-to-many mapping. That is an assertion; it has to be backed up with an argument, which I won’t repeat right away, but I state it so you can see the relevance. The unconscious information processing of the brain may be understood in functional and coarse-grained terms, but substance (in the most abstracted sense—the “being” of a “thing”), not just causal structure, must matter for conscious states themselves. This is why I take seriously the idea that there is a “Cartesian theater” and that physically it will be something very concrete—see my remark there about entangled excitons. To understand further how this single physical object can be identified with the conscious mind, we would need to understand its exact microphysical constitution, and for that we need string theory or some other fundamental theory—that’s the only place where you’ll find out what an electron actually is. (Then you would need to map the physically described states of this object onto the conscious states.)
The more computer-sciencey issues you mention, like self-representation and description-length epistemology, are also part of the problem, but they will have to be grounded in a deeper ontology …
Have you read and understood Tegmark’s papers about the MUH? Have you read and understood Paul Almond?
… than you can find in Tegmark or Almond. Reifying mathematical objects is not good enough, and neither is a systems hierarchy approach. Ironically, these two thinkers exemplify the two poles of the old opposition between property and substance, universal and particular, mathematics and physics (etc), which is precisely the sort of perennial ontological issue that will need to be dealt with.
When someone says that they have ideas about how CEV should work, I think, ‘this person just doesn’t understand how impossible CEV is’.
It’s the “functionalist” or “computer-science” part of CEV which I think should be solvable just through hard work and systematic labor. For example, inferring the schematic human decision procedure from data about the brain. That’s an exercise in using one finite-state machine (the AI) to infer a particular property of another class of finite-state machines (human brains). That shouldn’t require ontological innovation, just advanced mathematics.
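To make the finite-state-machine framing concrete, here is a minimal sketch (purely illustrative, and not something from the original exchange; the names `run` and `infer` are hypothetical): a program brute-forces every small Moore machine consistent with the input/output traces of a hidden machine. A real brain is astronomically larger, so this shows only the logical shape of the task, not its feasibility.

```python
# Toy sketch: identify a hidden Moore machine from input/output traces
# by brute-force search over all machines of a given size.
from itertools import product

def run(machine, inputs):
    """Run a Moore machine (transition table, output labels, start state)
    on an input sequence; return the emitted output sequence."""
    trans, outs, state = machine
    emitted = [outs[state]]            # Moore machines emit on entering a state
    for symbol in inputs:
        state = trans[(state, symbol)]
        emitted.append(outs[state])
    return emitted

def infer(traces, n_states, in_alphabet, out_alphabet):
    """Return every n-state Moore machine consistent with all observed traces."""
    keys = [(s, a) for s in range(n_states) for a in in_alphabet]
    consistent = []
    for targets in product(range(n_states), repeat=len(keys)):  # transition tables
        trans = dict(zip(keys, targets))
        for outs in product(out_alphabet, repeat=n_states):     # output labelings
            machine = (trans, list(outs), 0)
            if all(run(machine, ins) == obs for ins, obs in traces):
                consistent.append(machine)
    return consistent

# Hidden "brain": a 2-state parity machine over the input alphabet {0, 1}.
hidden = ({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}, [0, 1], 0)

# Probe it, record its behavior, then recover every machine explaining the data.
probes = [(1, 1, 0), (1, 0, 1, 1), (0, 0, 0)]
traces = [(p, run(hidden, p)) for p in probes]
print(len(infer(traces, 2, [0, 1], [0, 1])), "consistent 2-state machine(s)")
```

The same shape of inference, pursued with far cleverer mathematics than brute force, is what “inferring the schematic human decision procedure” would amount to in this framing.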
Finding the right ontological grounding for everything is the harder problem methodologically, because it’s not a problem we already know how to solve; but it should also be the simpler (less laborious) problem, because so much of the “data” is already in hand—conscious experience is right there in front of us at every moment, and from science we have endless third-person data on physics and neuroscience. So getting this part right is going to be something like finding the right perspective on a few very fundamental facts.
I therefore agree that CEV is difficult, but perhaps I analyse the difficulty in a different way to you.
I’m not really sure how to soften the blow… but I thought that such a comment needed to be made. I’m sorry.
It didn’t bother me at all. I have far more pressing matters to worry about in my physical life. For some reason I found it grimly amusing to see the post being voted down, down, down… Didn’t Bill Gates say, “640 karma ought to be enough for anybody”? Something like that. Anyway, you did me a favor by replying at such length.