Kripke’s essay on possible worlds makes it clear that there is nothing mysterious about possible worlds; they are simply states of information. Nothing hard.
Good for Kripke, then. I’ve often found that the major people in a field really do deserve their reputations, and I haven’t asserted that good philosophy is impossible, just that the field has failed to systematize it enough to make it worth reading.
However, you do not solve an AI problem by calling something a “state of information”. Given that there’s only one real world, how are these “possible worlds” formulated as cognitive representations? I can’t write an AI until I know this.
However, can you give me an immediate and agreed-upon answer to the question, “Is there a possible world where zombies exist?” Considering the questions that follow from that will make you realize how little of the structure of the “possible worlds” concept follows just from saying, “it is a state of information”.
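To put the point in code: here is a minimal sketch, in Python, of the obvious “worlds as states of information” encoding. The propositions and names are hypothetical, chosen purely for illustration and not anyone’s actual proposal; notice that the data structure is trivial to write down and settles nothing.

```python
# A minimal, illustrative sketch of the naive "possible worlds as
# states of information" encoding: a world is just an assignment of
# truth values to propositions.
from itertools import product

propositions = ["zombies_exist", "consciousness_is_physical"]

# Enumerating candidate worlds is trivial...
worlds = [dict(zip(propositions, values))
          for values in product([True, False], repeat=len(propositions))]

# ...but the encoding is silent on which assignments are genuinely
# *possible*. It happily enumerates a world where zombies exist;
# whether it should is exactly what "state of information" leaves open.
def possible(world):
    raise NotImplementedError("This is the actual AI problem.")
```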
Did Kripke mark his work as unfinished for failing to answer such questions? Or did he actually try to answer them? Now that would earn serious respect from me, and I might go out and start looking through Kripke’s stuff.
Robin: Well of course one standard response to such complaints is: “If you think you can do better, show us.” Not just better in a one-off way, but a better tradition that could continue itself. If you think you have done better and are being unfairly ignored, well then that is a different conversation.
Robin, my response here is mainly to philosophers who say, “We did all this work on metaethics, why are you ignoring us?” and my answer is: “The work you did is incommensurable with even the kind of philosophy that an AI researcher needs, which is cognitive philosophy and the reduction of mentalistic thinking to the non-mental; go read Gary Drescher for an example of the kind of mental labor I’m talking about. Some of you may have done such work, but that’s no help to me if I have to wade through all of philosophy to find it. Even your compilations of arguments are little help to me in actually solving AI problems, though when I need to explain something I will often check the Stanford Encyclopedia of Philosophy to see what the standard arguments are. And I finally observe that if you, as a philosopher, have not gone out and studied cognitive science and AI, then you really have no right to complain about people ‘ignoring relevant research’, and more importantly, you have no idea what I’m looking for.” This is my response to the philosophers who feel slighted by my travels through what they feel should be their territory, without much acknowledgment.
However, with all that said—if I were trying to build a tradition that would continue itself, these posts on Overcoming Bias would form a large part of how I did it, though I would be much more interested in making them sound more impressive (which includes formalizing/declarifying their contents and publishing them in journals) and I would assign a higher priority to e.g. writing up my timeless decision theory.