Philosophy PhD student. Interested in ethics, metaethics, AI, EA, disagreement/erisology. Former username Ikaxas
Vaughn Papenhausen
Not a programmer, but I think one other reason for this is that in at least certain languages (I think interpreted languages, e.g. Python, are the relevant category here), a name has to be defined before the point where it's actually used: the interpreter basically executes the code top-down instead of compiling the whole file first, so when it reaches a call it can't just look later in the file to figure out what you mean. So
    def brushTeeth():
        putToothpasteOnToothbrush()
        ...

    def putToothpasteOnToothbrush():
        ...
wouldn't work if brushTeeth() gets called before putToothpasteOnToothbrush() has been defined: at the moment of that call the interpreter hasn't yet executed the second definition, and it won't look ahead in the file for it.
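A minimal sketch of what I mean, assuming a plain Python script run top to bottom (the function names are just the toy ones from above, and the print line is only a stand-in):

    def brushTeeth():
        # This name is only looked up when brushTeeth() actually runs,
        # so it's fine that putToothpasteOnToothbrush isn't defined yet.
        putToothpasteOnToothbrush()

    # Calling brushTeeth() at this point would raise NameError, because the
    # interpreter executes top-down and hasn't reached the definition below.

    def putToothpasteOnToothbrush():
        print("toothpaste applied")  # stand-in body

    brushTeeth()  # works: both definitions have executed by the time this call runs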
Fyi, the link to your site is broken for those viewing on greaterwrong.com; it’s interpreting “—a” as part of the link.
Maybe have a special “announcements” section on the frontpage?
The way I like to think about this is that the set of all possible thoughts is like a space that can be carved up into little territories and each of those territories marked with a word to give it a name.
Probably better to say something like “set of all possible concepts.” Words denote concepts, complete sentences denote thoughts.
I’m curious if you’re explicitly influenced by Quine for the final section, or if the resemblance is just coincidental.
Also, about that final section, you say that “words are grounded in our direct experience of what happens when we say a word.” While I was reading I kept wondering what you would say about the following alternative (though not mutually exclusive) hypothesis: “words are grounded in our experience of what happens when others say those words in our presence.” Why think the only thing that matters is what happens when we ourselves say a word?
Master: Now, is Foucault's work the content you're looking for, or merely a pointer?
Student: What… does that mean?
Master: Do you think that the value of Foucault for you comes from the specific ideas he had, or from using him to even consider these two topics?
This put words to a feeling I've had a lot. Often I have some ideas, and use thinkers as a kind of handle to point to the ideas in my head (especially when I haven't actually read the thinkers yet). The problem is that this fools me into thinking that the ideas are developed, either by me or by the thinkers. I like this idea of using the thinkers to notice topics, but then developing the topics yourself, at least if the thinkers don't take those topics in the direction you had in mind to take them.
On a different note, if you’re interested in Foucault’s methodology, some search terms would be “genealogy” and “conceptual engineering.” Here is a LW post on conceptual engineering, and here is a review of a recent book on the topic (which I believe engages with Foucault as well as Nietzsche, Hume, Bernard Williams, and maybe others; I haven’t actually read the full book yet, just this review). The book seems to be pretty directly about what you’re looking for: “history for finding out where our concepts and values come from, in order to question them.”
Yep, check out the Republic; I believe this is in book 5, or if it's not in book 5, it's in book 6.
The received wisdom in this community is that modifying one's utility function is at least usually irrational. The classic source here is Steve Omohundro's 2008 paper, "The Basic AI Drives," and Nick Bostrom gives basically the same argument in Superintelligence, pp. 132-34. The argument is basically this: imagine you have an AI that is solely maximizing the number of paperclips that exist. Obviously, if it abandons that goal, there will be fewer paperclips than if it maintains that goal. And if it adds another goal, say maximizing staples, then this other goal will compete with the paperclip goal for resources, e.g. time, attention, steel, etc. So again, if it adds the staple goal, there will be fewer paperclips than if it doesn't. So if it evaluates every option by how many paperclips result in expectation, then it will choose to maintain its paperclip goal unchanged. This argument isn't mathematically rigorous, and allows that there may be special cases where changing one's goal may be useful. But the thought is that, by default, changing one's goal is detrimental from the perspective of one's current goals.
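To make the shape of the argument concrete, here's a toy calculation (the numbers, and the assumption that the two goals split resources evenly, are mine and purely illustrative):

    # Toy model of the Omohundro/Bostrom point, evaluated by the agent's
    # *current* criterion: expected number of paperclips.
    resources = 100        # hypothetical units of time/attention/steel
    clips_per_unit = 1.0   # hypothetical paperclip yield per unit of resources

    # Option A: keep the paperclip goal unchanged.
    clips_if_goal_kept = resources * clips_per_unit                # 100.0

    # Option B: also adopt a staple goal that competes for the same resources
    # (assume, purely for illustration, an even 50/50 split).
    clips_if_staple_goal_added = (resources / 2) * clips_per_unit  # 50.0

    print(clips_if_goal_kept > clips_if_staple_goal_added)         # True

Judged by expected paperclips, the option that modifies the goal loses, which is all the argument needs.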
As I said, though, there may be exceptions, at least for certain kinds of agents. Here’s an example. It seems as though, at least for humans, we’re more motivated to pursue our final goals directly than we are to pursue merely instrumental goals (which child do you think will read more: the one who intrinsically enjoys reading, or the one you pay $5 for every book they finish?). So, if a goal is particularly instrumentally useful, it may be useful to adopt it as a final goal in itself in order to increase your motivation to pursue it. For example, if your goal is to become a diplomat, but you find it extremely boring to read papers on foreign policy… well, first of all, I question why you want to become a diplomat if you’re not interested in foreign policy, but more importantly, you might be well-served to cultivate an intrinsic interest in foreign policy papers. This is a bit risky: if circumstances change so that it’s no longer as instrumentally useful, it may end up competing with your initial goals as described by the Bostrom/Omohundro argument. But it could work out that, at least some of the time, the expected value of changing your goal for this reason is positive.
Another paper to look at might be Steve Petersen’s paper, “Superintelligence as Superethical,” though I can’t summarize the argument for you off the top of my head.
I would think the metatheological fact you want to be realist about is something like “there is a fact of the matter about whether the God of Christianity exists.” “The God of Christianity doesn’t exist” strikes me as an object-level theological fact.
The metaethical nihilist usually makes the cut at claims that entail the existence of normative properties. That is, “pleasure is not good” is not a normative fact, as long as it isn’t read to entail that pleasure is bad. “Pleasure is not good” does not by itself entail the existence of any normative property.
Same
Really? I’m American and it sounds perfectly normal to me.
I think this post is extremely interesting, and on a very important topic. As I said elsethread, for this reason, I don’t think it should be in negative karma territory (and have strong-upvoted to try to counterbalance that).
On the object level, while there’s a frame of mind I can get into where I can see how this looks plausible to someone, I’m inclined to think that this post is more of a reductio of some set of unstated assumptions that lead to its conclusion, rather than a compelling argument for that conclusion. I don’t have the time right now to think about what exactly those unstated assumptions are or where they go wrong, but I think that would be important. When I get some more time, if I remember, I may come back and think some more about this.
I agree with this as well. I have strongly upvoted in an attempt to counterbalance this, but even so it is still in negative karma territory, which I don’t think it deserves.
A possible example of research film-study in a very literal sense: Andy Matuschak’s 2020-05-04 Note-writing livestream.
I would love it if more people did this sort of thing.
I think if you accept the premise that the machine somehow magically truly simulates perfectly and indistinguishably from actual reality, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I’m not sure it even makes sense to say either universe is more “real”, since they’re literally identical in every way that matters (for the differences we can’t observe even in theory, I appeal to Newton’s flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation.
I see what you're trying to get at here, but as stated I think this begs the question. You're assuming here that the only ways universes could differ that would matter would be ways that have some impact on what we experience. People who accept the experience machine argument (let's call them "non-experientialists") don't agree. They (usually) think that whether we're deceived, or whether our beliefs are actually true, can have some effect on how good our life is.
For example, consider two people whose lives are experientially identical, call them Ron and Edward. Ron lives in the real world, and has a wife and two children who love him, and whom he loves, and who are a big part of the reason he feels his life is going well. Edward lives in the experience machine. He has exactly the same experiences as Ron, and therefore also thinks he has a wife and children who love him. However, he doesn’t actually have a wife and children, just some experiences that make him think he has a wife and children (so of course “his wife and children” feel nothing for him, love or otherwise. Perhaps these experiences are created by simulations, but suppose the simulations are p-zombies who don’t feel anything). Non-experientialists would say that Ron’s life is better than Edward’s, because Edward is wrong about whether his wife and children love him (naturally, Edward would be devastated if he realized the situation he was in; it’s important to him that his wife and children love him, so if he found out they didn’t, he would be distraught). He won’t ever find this out, of course (since his life is experientially identical to Ron’s, and Ron will never find this out, since Ron doesn’t live in the experience machine). But the fact that, if he did, he would be distraught, and the fact that it’s true, seem to make a difference to how well his life goes, even though he will never actually find out. (Or at least, this is the kind of thing non-experientialists would say.)
(Note the difference between the way the experience machine is being used here and the way it’s normally used. Normally, the question is “would you plug in?” But here, the question is “are these two experientially-identical lives, one in the experience machine and one in the real world, equally as good as each other? Or is one better, if only ever-so-slightly?” See this paper for more discussion: Lin, “How to Use the Experience Machine”)
For a somewhat more realistic (though still pretty out-there) example, imagine Andy and Bob. Once again, Andy has a wife and children who love him. Bob also has a wife and children, and while they pretend to love him while he’s around, deep down his wife thinks he’s boring and his children think he’s tyrannical; they only put on a show so as not to hurt his feelings. Suppose Bob’s wife and children are good enough at pretending that they can fool him for his whole life (and don’t ever let on to anyone else who might let it slip). It seems like Bob’s life is actually pretty shitty, though he doesn’t know it.
Ultimately I’m not sure how I feel about these thought experiments. I can get the intuition that Edward and Bob’s lives are pretty bad, but if I imagine myself in their shoes, the intuition becomes much weaker (since, of course, if I were in their shoes, I wouldn’t know that “my” wife and children don’t love “me”). I’m not sure which of these intuitions, if either, is more trustworthy. But this is the kind of thing you have to contend with if you want to understand why people find the experience machine compelling.
Not sure if this is exactly what you’re looking for, but you could check out “Do Now” on the play store: https://play.google.com/store/apps/details?id=com.slamtastic.donow.app (no idea if it’s available for apple or not)
Two things I’ve come across. Haven’t used either much, but figured I’d mention them:
https://mermaid-js.github.io/mermaid-live-editor/ (This one is more of a markdown-style syntax than a drawing tool, but I’ve linked to the live editor)
Ah, I think the fact that there’s an image after the first point is causing the numbered list to be numbered 1,1,2,3.
My main concern with using an app like Evergreen Notes is that a hobby project built by one person seems like a fragile place to leave a part of my brain.
In that case you might like obsidian.md.
I found this one particularly impressive: https://m.youtube.com/watch?v=AHiu-EDJUx0
The use of “oops” at the end is spot on.
I think the point was that it’s a cause you don’t have to be a longtermist in order to care about. Saying it’s a “longtermist cause” can be interpreted either as saying that there are strong reasons for caring about it if you’re a longtermist, or that there are not strong reasons for caring about it if you’re not a longtermist. OP is disagreeing with the second of these (i.e. OP thinks there are strong reasons for caring about AI risk completely apart from longtermism).