You mean this substance? https://en.wikipedia.org/wiki/Mesembrine
Do you have a recommended brand, or places to read more about it?
I would love to hear the principal’s take on your conversation.
Interesting, I can see why that would be a feature. I don’t mind the taste at all actually. Before, I had some of their smaller citrus flavored kind, and they dissolved super quick and made me a little nauseous. I can see these ones being better in that respect.
I ordered some of the Life Extension lozenges you said you were using; they are very large and take a long time to dissolve. It’s not super unpleasant or anything; I’m just wondering if you would count this against them?
Thank you for your extended engagement on this! I understand your point of view much better now.
Oh, I think I get what you’re asking now. Within-lifetime learning is a process that includes something like a training process for the brain, where we learn to do things that feel good (a kind of training reward). That’s what you’re asking about if I understand correctly?
I would say no, we aren’t schemers relative to this process, because we don’t gain power by succeeding at it. I agree this is a subtle and confusing question, and I don’t know if Joe Carlsmith would agree, but the subtlety to me seems to belong more to the nuances of the situation & analogy than to any imprecision in the definition.
(Ordinary mental development includes something like a training process, but it also includes other stuff more analogous to building out a blueprint, so I wouldn’t overall consider it a kind of training process.)
If you’re talking about this report, it looks to me like it does contain a clear definition of “schemer” in section 1.1.3, pg. 25:
It’s easy to see why terminally valuing reward-on-the-episode would lead to training-gaming (since training-gaming just is: optimizing for reward-on-the-episode). But what about instrumental training-gaming? Why would reward-on-the-episode be a good instrumental goal?
In principle, this could happen in various ways. Maybe, for example, the AI wants the humans who designed it to get raises, and it knows that getting high reward on the episode will cause this, so it training-games for this reason.
The most common story, though, is that getting reward-on-the-episode is a good instrumental strategy for getting power—either for the AI itself, or for some other AIs (and power is useful for a very wide variety of goals). I’ll call AIs that are training-gaming for this reason “power-motivated instrumental training-gamers,” or “schemers” for short.
By this definition, a human would be considered a schemer if they gamed something analogous to a training process in order to gain power. For example, if a company tries to instill loyalty in its employees, an employee who professes loyalty insincerely as a means to a promotion would be considered a schemer (as I understand it).
I think this post would be a lot stronger with concrete examples of these terms being applied in problematic ways. A term being vague is only a problem if it creates some kind of miscommunication, confused conceptualization, or opportunity for strategic ambiguity. I’m willing to believe these terms could cause those problems in certain contexts, but that’s hard to evaluate in the abstract, without concrete cases where they actually did.
I’m not sure I can come up with a distinguishing principle here, but I feel like some but not all unpleasant emotions feel similar to physical pain, such that I would call them a kind of pain (“emotional pain”), and cringing at a bad joke can be painful in this way.
More reasons: people wear sunglasses when they’re doing fun things outdoors, like going to the beach or vacationing, so sunglasses are associated with that; also, just hiding part of a picture can sometimes cause your brain to fill it in with a more attractive completion than is likely.
This probably does help capitalize AI companies a little bit, since demand for call options will create demand for the underlying. The effect is probably relatively small (?), but I’m not confident in my ability to estimate it at all.
I’m confused about what you mean & how it relates to what I said.
It’s totally wrong that you can’t argue against someone who says “I don’t know”: you argue against them by showing how your model fits the data and how any plausible competing model either doesn’t fit or shares the salient features of yours. It’s bizarre to describe “I don’t know” as “garbage” in general, because it is the correct stance to take when neither your prior nor your evidence sufficiently constrains the distribution of plausibilities. Paul obviously didn’t posit an “unobserved kindness force”, because he was specifically describing the observation that humans are kind. I think Paul and Nate had a very productive disagreement in that thread, and this seems like a wildly reductive mischaracterization of it.
I don’t think this is accurate; I think most philosophy is done under motivated reasoning, but it is not straightforwardly about signaling group membership.
Hi, any updates on how this worked out? Considering trying this...
This is the most interesting answer I’ve ever gotten to this line of questioning. I will think it over!
What observation could demonstrate that this code indeed corresponded to the metaphysically important sense of continuity across time? What would the difference be between a world where it did and a world where it didn’t?
Say there is a soul. We inspect a teleportation process, and we find that, just like your body and brain, the soul disappears on the transmitter pad, and an identical soul appears on the receiver. What would this tell you that you don’t already know?
What, in principle, could demonstrate that two souls are in fact the same soul across time?
It is epistemic relativism.
Questions 1 and 3 are explicitly about values, so I don’t think they do amount to epistemic relativism.
There seems to be a genuine question about what happens and which rules govern it, and you are trying to sidestep it by saying “whatever happens—happens”.
I can imagine a universe with rules such that teleportation kills a person, and a universe in which it doesn’t. I’d like to know how our universe works.
There seems to be a genuine question here, but it is not at all clear that there actually is one. It is pretty hard to characterize what this question amounts to, i.e. what the difference would be between two worlds where the question has different answers. I take OP to be espousing the view that the question isn’t meaningful for this reason (though I do think they could have laid this out more clearly).
Cool, thanks!