Yes. Sorry for the unintended ambiguity.
I see. No problem.
By the way, do you have an opinion on whether it’s good or bad that nobody in the AI community seems to employ FPE?
Certainly not the writings in AI discussed on LW. Probably not any other writings either.
Isn’t that what I said? I don’t get what you’re trying to say here.
ETA: Oh, are you responding to “perhaps they all employ FPE like it’s nothing”? At first, I thought you were responding to “I’m not well-read in AI”.
First, a couple general considerations:
How far are you in the book? If you’re stuck in Part II, I would recommend skipping to its last section. In my opinion, for somebody just starting out with his philosophy, the rest of that part simply isn’t insightful enough to justify how difficult it is to read. Save it for later, if at all.
Remember that he wrote it over 200 years ago. You’ll have to spend a lot of time getting fluent in his idiosyncratic 18th century English to really get what he’s saying. I find that sort of thing interesting, so it was actually a bonus for me. But if it would only be an obstacle for you, and you have a sufficiently high time preference for this kind of thing, you might be better off sticking to something that uses more familiar language.
Now, I want to say something about his philosophy.
He used a rare method that I call “first-person epistemology” (FPE). He didn’t start out from the usual premise: that he was but one mind in a physical universe. No, he began much deeper: from nothing but the immediately given. His world was simply a sequence of sensations. For example, he didn’t directly apprehend 3D space. His senses conveyed only a sequence of 2D images on his visual field. If the term “3D space” is to mean anything, we must define it as referring to a particular kind of sequence of those 2D images. Our belief in 3D space pays rent by helping us predict what 2D image we’ll experience in what situation (perhaps among other things).
I think that this method (FPE) is extremely important, but nobody ever seems to employ it. I’ve only seen two people: him and Berkeley. Perhaps there have been others. I’m still looking. Based on some bio I read a while ago, Carnap seemed to fit the bill, but I don’t really know. I haven’t tried him yet. I can’t read German, and I hate reading translations. They usually suck. Anyway, I said that nobody seems to do Hume any justice. I’m not prepared to substantiate this, but I think that at least part of that is because they don’t understand his method (FPE). Nobody seems to get FPE, even though it seems totally obvious to me.
I think that the real progress to be made in AI is in understanding how our own consciousness works. If we can understand our own action, we can build an actor. Maybe even a better one. But how could we do that? I think that FPE is the way to go, and Hume did it best. Human Nature (or at least Book I and some of the parts of Books II and III) is an excellent monograph on how our consciousness works. But what do I know? I’m not well-read in AI. Perhaps they all employ FPE like it’s nothing, and they’re all well past Hume’s stuff. No idea. Maybe you could let me know? Any idea?
Anyway, a few more things:
If you’re having trouble with a section, I might be able to help. I generally know what he’s talking about, and I can usually translate his points into more modern wording.
I think that he’s extremely important, and I think that his treatise is his best work. He’s tied with Mises as my favorite writer, and his treatise is tied with Human Action as my favorite book. I don’t have any authority around here, but perhaps this means something to you.
If I wanted to be cocky, I would say that you probably wouldn’t get anything important from Hume that you wouldn’t get better and easier from my future posts anyway. I intend to try to convey a lot of important stuff to this community, and Hume is one of my two biggest influences.
But enough of all that stuff. Let’s get to the real question. What are your goals? Why do you think that you would be better off reading what you mentioned instead of Hume? What would they have that Hume wouldn’t? What exactly are you trying to accomplish by reading this stuff? After all, where to turn always comes down to where you’re trying to go. I can’t have an opinion on whether you’re wrong until you tell me what you’re attempting to do.
Damn. Should’ve known.
Why? Just wondering.
Everybody’s always citing Hume, but nobody ever seems to do him any justice. The OP is simply yet another example of this trend. I have no idea whether after reading the first paragraph of your post, Hume would agree that he couldn’t “bring himself to seriously doubt the content of his own subjective experience”, but I’m pretty sure that by the end of it, he would summarily reject your interpretation of his epistemology.
First of all, to make what I’m saying at least sound plausible, I need only give you one counter-example:
He referred to our propensity to ascribe a place to each sound as an illusion. According to Hume, a sound exists nowhere. Of course our natural reaction is to balk at those words, but that’s only because we so strongly associate the object that caused the sound (e.g., a TV) with the sound itself (i.e., the subjective experience we call “the sound it’s making”). But they’re clearly separate in our subjective experience, and unlike the TV, the sound has neither a shape nor a location. (1)
There he’s clearly not taking it for granted that we never get confused about the content of our subjective experience. He thinks that sounds exist nowhere, but he also recognizes that it’s more natural to get confused and not notice that. According to Hume, our natural tendency is to be wrong about this aspect of our subjective experience. Perhaps he would also agree that there are more cases like this?
But I haven’t proven you wrong yet. I’ve only tried to throw some doubt on your side. At this point, all I can do is sit back and ask you, “Can you cite me a significant number of instances where Hume contradicts the insight in your post, and perhaps by doing so, leads himself into error?” I mean, I have virtually no doubt that that’s an impossible task, but then again I’ll still be here if you try to shoot me down.
(1) From Human Nature. He starts off by saying this, and then moves on to saying this. If you want, ctrl-f to find exactly where those quotes came from. Perhaps I didn’t do justice to his insight about our subjective experience of sound and whatever, but it’s okay. I was only trying to show you an example of where he implied that we can be wrong about the content of our own subjective experience.
Very interesting suggestion. Thanks.
By the way, in that word language, I simply have a group of 4 grammatical particles, each referring to 1 of the 4 set operations (union, intersection, complement, and symmetric difference). That simplifies a few of the systems that we find in English or whatever. For example, we don’t find intersection only in the relationship between a noun and an adjective; we also find it in a bunch of other places. Here’s a list of a bunch of examples of where we see one of the set operations in English:
There’s a deer over there, and he looks worried. (intersection)
He’s a master cook. (intersection between “master” and “cook”)
The stars are the suns and the planets. (union)
Either there’s a deer over there, or I’m going crazy. (symmetric difference)
Everybody here except Phil is an idiot. (complement)
Besides when I’m doing economics, I’m an academic idiot. (complement)
A lake-side or ocean-side view in addition to a comfortable house is really all I want out of life. (intersection)
A light bulb is either on or off. (symmetric difference)
It’s both a table and a chair. (intersection)
Rocks that aren’t jagged won’t work for this. (complement)
A traditional diet coupled with a routine of good exercise will keep you healthy. (intersection)
A rock or stone will do. (union)
I might be wrong about some of those, so look at them carefully. And I’m sure there are a bunch of other examples. Maybe I missed a lot of the really convoluted ones because of how confusing they are. Either way, the point is that there are a bunch of random examples of the set operations in English. I think simply having a group of 4 grammatical particles for them would make the system a lot simpler and perhaps easier to learn and use.
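Here’s a rough Python sketch of what I mean, treating each particle as one of the four set operations. The particle spellings and the example sets are made up, purely for illustration:

```python
# A toy model of the four grammatical particles as set operations.
# The particle names and the example word-sets are hypothetical,
# chosen only to illustrate the idea.

masters = {"alice", "bob"}        # the set of everything that's a master
cooks = {"bob", "carol"}          # the set of everything that's a cook

# "He's a master cook."  ->  intersection
master_cooks = masters & cooks            # {'bob'}

# "The stars are the suns and the planets."  ->  union
suns, planets = {"sol"}, {"earth", "mars"}
stars = suns | planets                    # {'sol', 'earth', 'mars'}

# "Everybody here except Phil is an idiot."  ->  complement
# (relative to the domain "everybody here")
everybody_here = {"phil", "quinn", "rae"}
idiots = everybody_here - {"phil"}        # {'quinn', 'rae'}

# "Either there's a deer over there, or I'm going crazy."  ->  symmetric
# difference (exactly one of the two holds, not both), modeled here over
# a made-up set of possibilities
deer_there = {"world1", "world2"}         # possibilities with a deer
im_crazy = {"world2", "world3"}           # possibilities where I'm crazy
exactly_one = deer_there ^ im_crazy       # {'world1', 'world3'}
```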
Are there any natural languages that do anything like this? Sure, there are probably a lot of natural languages that don’t make the distinction between nouns and adjectives. That distinction is nearly useless in an SVO language. We even see English speakers “violate” the noun/adjective system a lot. For example, something like this: “Hand me one of the longs.” If you work someplace where you constantly have to distinguish between the long and short version of a tool, you’ll probably hear that a lot. But are there any natural languages that use a group of grammatical particles in this way? Or at the very least use one of them consistently?
Note: Perhaps I’m being too hard on the noun/adjective system in English. It’s often useless, but it serves a purpose that keeps it around. Putting two nouns next to each other (e.g., “forest people”) signifies that there’s some relation between the two sets, whereas an adjective in front of a noun signifies that the relation is specifically intersection. That seems to be the only point of the system. Maybe I’m missing something?
Another note: I’m not an expert on set theory. Maybe I’m abusing some of these terms. If anybody thinks that’s the case, I would appreciate the help.
I think that most of the potential lies in the “extra-radical possibilities”. The traditional linguistic descriptions (adjectives, nouns, prepositions, and so on) don’t seem to apply very well to any of my word languages. After all, they’re just a bunch of natural language components; they needn’t show up in an artificial language.
For example, in one of my word languages, there’s no distinction between nouns and adjectives (meaning that there aren’t any nouns or adjectives, I guess). To express the equivalent of the phrase “stupid man”, you simply put the word referring to the set of everything stupid next to the one referring to the set of everything that’s a man, and put the word for set intersection in front of them. You get one of these two examples:
either: [set intersection] [set of everything stupid] [set of everything that’s a man]
or: [set intersection] [set of everything that’s a man] [set of everything stupid]
Of course that assumes that there’s no single word already referring to the intersection of those two sets, or that you just don’t want to use it, but whatever. I just meant to give it as an example.
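If it helps, here’s a minimal sketch of how such an utterance could be evaluated, assuming the prefix order from the two examples above. The lexicon and the particle spellings are invented just to make it runnable:

```python
# A toy evaluator for the word order sketched above:
# [operation] [set word] [set word].
# The lexicon and particle names are hypothetical.

LEXICON = {
    "stupid": {"tom", "dick"},        # the set of everything stupid
    "man":    {"dick", "harry"},      # the set of everything that's a man
}

PARTICLES = {
    "INTERSECT":  lambda a, b: a & b,
    "UNION":      lambda a, b: a | b,
    "COMPLEMENT": lambda a, b: a - b,
    "SYMDIFF":    lambda a, b: a ^ b,
}

def evaluate(utterance):
    """Evaluate a three-word utterance: a particle, then two set words."""
    particle, a, b = utterance.split()
    return PARTICLES[particle](LEXICON[a], LEXICON[b])

# Both word orders give the same set, since intersection commutes:
print(evaluate("INTERSECT stupid man"))   # {'dick'}
print(evaluate("INTERSECT man stupid"))   # {'dick'}
```

Note that intersection, union, and symmetric difference all commute, so for those three particles the two word orders really are interchangeable; complement is the one where order would matter.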
I think that this system makes the language more elegant, but it’s not a terribly big improvement. And it’s not very radical either. The more radical and useful stuff, I’m not ready to give an example of. This is just something simple. But it’s sufficient to say that you shouldn’t let the traditional descriptions constrain you. If you’re trying to make a better language, why limit yourself to just mixing and matching the old parts? There’s a world of opportunity out there, but you’re not gonna find much of it if you trap yourself in the “natural language paradigm”.
Sorry, I should have said that it’s not necessarily the same animal. The whole mountain of evidence concerns natural languages, right? Do you have any evidence that an artificial language with a self-segregating morphology and a simple sound structure would also go through the same changes?
So I’m not necessarily saying that the changes wouldn’t occur; I’m simply saying that we can’t reject out of hand the idea that we could build a system where they won’t occur, or at least build a system where they would occur in a useful way (rather than a way that would destroy its superior qualities). Where the system starts would determine its evolution; I see no reason why you couldn’t control that variable in such a way that it would be a stable system.
Thanks for the link. Yeah, that’s one of the ideas. It’s still in its infancy though, so I don’t have anything to show off.
The flow thing was just an example. The point was simply to illustrate that we shouldn’t reject out of hand the idea that an ordinary artificial language (as opposed to mathematical notation or something) could retain its regularity.
The point is simply that the evolution of the language directly depends on how it starts, which means that you could design it in such a way that it drives its own evolution in a useful way. Just because it would evolve doesn’t mean that it would lose its regularity. The flow thing is just one example of many. If it flows well, that’s simply one thing not to have to worry about.
However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don’t you think that some of them would have already hit it by now?
No, I don’t. Evolution is always a hack of what came before it, whereas scrapping the whole thing and starting from scratch doesn’t suffer from that problem. I don’t need to hack an existing structure; I can build exactly what I want right now.
Here’s an excellent example of this general point: Self-segregating morphology. That’s the language construction term for a sound system where the divisions between all the components (sentences, prefixes, roots, suffixes, and so on) are immediately obvious and unambiguous. Without understanding anything about the speech, you know the syntactical structure.
That’s a pretty cool feature, right? It’s easy to build that into an artificial language, and it certainly makes everything easier. It would be an important part of having a stable sound system. The words wouldn’t interfere with each other, because they would be unambiguously started and terminated within a sound system where the end of every word can run smoothly against the start of any other word. If I were trying to make a stable sound system, the first thing that I would do is make the morphology self-segregating.
But if a self-segregating morphology is such a happy point, why hasn’t any natural language come to that point? Well, that should be pretty obvious. No hack could transform a whole language into having a self-segregating morphology. Or at least I don’t know of such a hack. But even then, it’s trivially easy to make one if you start from scratch! Don’t you accept the idea that some things are easier to design than to evolve (perhaps because the hacking process has no path that stays useful at every step on the way to that specific endpoint)?
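To make that concrete, here’s a tiny sketch of one way a self-segregating morphology could work. The phoneme rule is invented for the example; it’s not how Lojban or any real project actually does it:

```python
# A toy self-segregating morphology.  Invented rule: every morpheme is
# consonant+vowel, and the vowel "o" appears only in a word's final
# morpheme, so a word ends exactly where an "o" appears.  That makes
# word boundaries in an unspaced phoneme stream unambiguous.

import re

def segment(stream):
    """Split an unspaced phoneme stream into words at each final 'o'."""
    return re.findall(r"(?:[ptkmns][aeiu])*[ptkmns]o", stream)

print(segment("takimosunopeno"))   # ['takimo', 'suno', 'peno']
```

With a rule like that, no hearer ever has to guess where one word ends and the next begins, which is the whole point of the feature.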
The exact mechanisms of phonetic change are still unclear, but a whole mountain of evidence indicates that it’s an inevitable process.
That whole mountain of evidence concerns natural languages with irregular sound systems. A self-segregating morphology that flows super well would be a whole different animal.
Look at it this way: the fundamental question is whether your artificial language will use the capabilities of the human natural language hardware. If yes, then it will have to change to be compatible with this hardware, and will subsequently share all the essential properties of natural languages (which are by definition those that are compatible with this hardware, a subset of which happens to be spoken around the world). If not, then you’ll get a formalism that must be handled by the general computational circuits in the human brain, which means that its use will be very slow, difficult, and error-prone for humans, just like with programming languages and math formulas.
Per my points above, I still don’t see why using the capabilities of the natural language hardware would lead to it changing in all sorts of unpredictable ways, especially if it’s not used for anything but trying to reproduce your thought in their head, and if it’s not used by anybody but a specific group of people with a specific purpose in mind. I still imagine an engine well-built to drive its own evolution in a useful way, and avoid becoming an irregular mess.
If you build an artificial word language, you could make it in such a way that it would drive its own evolution in a useful way. A few examples:
If you make a rule available to derive a word easily, it would be less likely that the user would coin a new one.
If you also build a few other languages with a similar sound structure, you could make it super easy to coin new words without messing up the sound system.
If you make the sound system flow well enough, it would be unlikely that anybody would truncate the words to make it easier to pronounce or whatever.
I don’t understand how you could dismiss out of hand the idea that you could build a language that wouldn’t lose its superior qualities. There are a ton of different ways to make the engine defend itself in that regard. People mess with the English sound system only to make it flow better, and there’s no reason why you couldn’t just make an artificial language which already flows well enough.
Also, I’m not gonna try to convert the masses to my artificial language. In normal life, we spend a lot of our time using English to try to do something other than get the other person to think the same thought. We try to impress people, we try to get people to get us a glass of water, etc. I’m not interested in building a language for that kind of communication. All I’m interested in is building a language for what we try to do here on LW: reproduce our thought process in the other person’s head.
But what that means is that the “wild” needn’t be so wild. If the only people who use the artificial language are 1,000 people like you and me, I don’t see why we couldn’t retain its superior structure. I don’t see why I would take a perfectly good syntax and start messing with it. It would be specialized for one purpose: reproducing one’s thoughts in another’s head, especially for deep philosophical issues. We would probably use English in a lot of our posts! We would probably use a mix of English and the artificial language.
My response (“how are you so sure of all that stuff”) probably wasn’t very constructive, so I apologize. Perhaps I should have asked for an example of an artificial language that transformed into an irregular natural one. Since you probably would have mentioned Esperanto, I’ll respond to that. Basically, Esperanto was a partially regularized mix and match of a bunch of different natural language components. I have no interest in building a language like that.
Languages like Esperanto are still in the “natural language paradigm”; they’re basically just like idealized natural languages. But I have a different idea. If I build an artificial word language, its syntax won’t resemble any natural language that you’ve seen. At least not in that way. Actually, it would probably be more to the point to simply say that Esperanto was built for a much different reason. It’s a mix and match of a bunch of natural language components, and people use it like they use a natural language. It’s not surprising that it lost some of its regularity.
I’m getting pretty messy in this post, but I simply don’t have a concise response to this topic. Everywhere I go, people seem to have that same idea about artificial language. They say that we’re built for natural language, and either artificial language is impossible, or it would transform into natural language. I really just don’t know where people get that idea. How could we conceive of and build an artificial language, but at the same time be incapable of using it? That seems like a totally bizarre idea. Maybe I don’t understand it or something.
How are you so sure of all that stuff?
For a few years now, I’ve been working on a project to build an artificial language. I strongly suspect that the future of the kind of communication that goes on here will belong to an artificial language. English didn’t evolve for people like us. For our purpose, it’s a cumbersome piece of shit, rife with a bunch of fallacies built directly into its engine. And I assume it’s the same way with all the other ones. For us, they’re sick to the core.
But I should stress that I don’t think the future will belong to any kind of word language. English is a word language, Lojban is a word language, etc. Or at least I don’t think the whole future will belong to one. We must free ourselves from the word paradigm. When somebody says “language”, most people think words. But why? Why not think pictures? Why not diagrams? I think there’s a lot of potential in the idea of building a visual language. An artificial visual language. That’s one thing I’m working on.
Anyway, for the sake of your rationality, there’s a lot at stake here. A bad language doesn’t just fail to properly communicate to other people; it systematically corrupts its user. How often do you pick up where you left off in a thought process by remembering a bunch of words? All day every day? Maybe your motto is to work to “improve your rationality”? Perhaps you write down your thoughts so you can remember them later? And so on. It’s not just other people who can misinterpret what you say; it’s also your future self who can misinterpret what your present self says. That’s how everybody comes to believe such crazy stuff. Their later selves systematically misinterpret their earlier selves. They believe what they hear, but they hear not what they meant to say.
But isn’t being wary of coming off as lazy or unconstructive different from being afraid to make mistakes? The former seems desirable; the latter not so much.
I haven’t been here long enough to verify whether that norm (that it doesn’t require bravery to admit a mistake) really is in place, but assuming it is, I’m sure I’ll enjoy my stay!
Indeed, if there’s anything that could make or break your rationality in one shot, it’s whether you’re afraid to make mistakes. There’s no influence more corrupting than being afraid to screw up, and there’s nothing more liberating than being free of that fear.
I like this post, if only because it cuts through the standard confusion between feeling as if doing something particular would be morally wrong, and thinking that. The former is an indicator like a taste or a headache, and the latter is a thought process like deciding it would be counterproductive to eat another piece of candy.
I don’t know what the LW orthodoxy says on this issue; all I know is in general, it’s pretty common for people to equivocate between moral feelings and moral thoughts until they end up believing something totally crazy. Nobody seems to confuse how good they think a piece of cake would taste with their idea of whether it would be an otherwise productive thing to do, but everybody seems to do that with morality. What’s it like here?
Anyway, I agree. If we decide it would be morally wrong to eat meat, we would naturally prefer our feeling that a steak would really hit the spot right now to stop distracting us and depleting our precious willpower, right? Hold on. Let’s analyze this situation a little deeper. It’s not that you simply think it would ultimately be wrong to eat a piece of meat; it’s that you think that about killing the animal. Why don’t you want to eat the meat? Not for its own sake, but because that would kill the animal.
It’s an example where two conclusions contradict each other. At one moment, you feel revulsion at how you imagine somebody slaughtering a helpless cow, but at another one you feel desire for the taste of the steak. You’re torn. There’s a conflict of interests between your different selves from one moment to the next. One wants the steak no matter the price; the other considers the price way too steep. You might indulge in the steak for one minute, but regret it the next. Sounds like akrasia, right?
If you consciously decide it would be good to eat meat, the feeling of revulsion would be irrational; if you decide the opposite, the feeling of desire would. In the first case, you would want to self-modify to get rid of the useless revulsion, and in the second one, you would want to do so to get rid of the useless desire. Or would you? What if you end up changing your mind? Would it really be a good idea to nuke every indicator you disagree with? What about self-modifying so cake doesn’t taste so good anymore? Would you do that to get into better shape?
Note: I’m just trying to work through the same issue. Please forgive me if this is a bit of a wandering post; most of them will be.
But could you really have saved $100 by having decided to buy that same exact house except without that extra square foot?