Hedonic and desire theories are perfectly standard; plenty of people have talked about them here, including myself. Jeffrey’s utility theory is explicitly meant to model (beliefs and) desires. Both are also often discussed in ethics, including over at the EA Forum. Daniel Kahneman has written about hedonic utility. Equating money with utility is a common simplification in many economic contexts where expected utility is actually calculated, e.g. when talking about bets and gambles, even though it isn’t held to be perfectly accurate. I hadn’t encountered the reproduction and energy interpretations before, but they do make some sense.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
Interesting. This reminds me of a related thought I had: Why do models with differential equations work so often in physics but so rarely in other empirical sciences? Perhaps physics simply is “the differential equation science”.
Which is also related to the frequently expressed opinion that philosophy makes little progress because everything that gets developed enough to make significant progress splits off from philosophy. Because philosophy is “the study of ill-defined and intractable problems”.
Not saying that I think these views are accurate, though they do have some plausibility.
It seems to be only “deception” if the parent tries to conceal the fact that he or she is simplifying things.
There is also the related problem of intelligence being negatively correlated with fertility, which leads to a dysgenic trend. Even if preventing people below a certain level of intelligence from having children were realistically possible, it would make another problem more severe: the fertility of smarter people is far below replacement, leading to quickly shrinking populations. Though fertility is likely partially heritable and would go up again after some generations, once the descendants of the (currently rare) high-fertility people start to dominate.
This seems to be a relatively balanced article which discusses several concepts of utility with a focus on their problems, while acknowledging some of their use cases. I don’t think the downvotes are justified.
That’s an interesting perspective. Only it doesn’t seem to fit into the simplified but neat picture of decision theory. There, everything is sharply divided into two kinds: statements we can make true at will (actions we can currently decide to perform), to which we therefore don’t need to assign any probability (hold a belief about them happening), and outcomes, which we can’t make true directly and which are at most consequences of our actions. We can assign probabilities to outcomes conditional on our available actions, and a value to each outcome, which lets us compute the “expected” value of each action currently available to us. A decision is then simply picking the currently available action with the highest computed value.
Though as you say, such a discretization for the sake of mathematical modelling fits poorly with the continuity of time.
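To make that picture concrete, here is a toy sketch in Python; the actions, outcomes, and numbers are made up purely for illustration, not taken from anything above.

```python
# Toy sketch of the expected-value picture described above;
# actions, outcomes, and numbers are invented for illustration.
actions = {
    "take_umbrella":  {"stay_dry": 0.95, "get_wet": 0.05},
    "leave_umbrella": {"stay_dry": 0.60, "get_wet": 0.40},
}
values = {"stay_dry": 1.0, "get_wet": -2.0}

def expected_value(outcome_probs):
    """Probability-weighted value of the outcomes of one action."""
    return sum(p * values[outcome] for outcome, p in outcome_probs.items())

# The "decision" is just picking the available action with the highest expected value.
decision = max(actions, key=lambda a: expected_value(actions[a]))
print(decision, {a: expected_value(p) for a, p in actions.items()})
```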
Maybe this is avoided by KV caching?
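For context, a minimal sketch of the general mechanism of KV caching in a decoder-only transformer, independent of whatever specific issue is at stake here; the shapes and attention math are the standard ones, not any particular library's API.

```python
# Past keys/values are stored so that each newly generated token only computes
# its own K/V and attends over the cache, instead of recomputing attention
# for the whole prefix. Single head, no batching, for brevity.
import numpy as np

d = 8                       # head dimension
cache_k, cache_v = [], []   # grows by one entry per generated token

def attend_with_cache(q_new, k_new, v_new):
    """Append the new token's K/V to the cache and attend over the full cache."""
    cache_k.append(k_new)
    cache_v.append(v_new)
    K = np.stack(cache_k)               # (t, d)
    V = np.stack(cache_v)               # (t, d)
    scores = K @ q_new / np.sqrt(d)     # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                  # attention output for the new token

# One decoding step per call; q/k/v would come from the model's projections.
out = attend_with_cache(np.random.randn(d), np.random.randn(d), np.random.randn(d))
```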
This is not how many decisions feel to me—many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it’s distinct in time from the action itself.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future.
See Spohn’s example about believing (“deciding”) you won’t wear shorts next winter:
One might object that we often do speak of probabilities for acts. For instance, I might say: “It’s very unlikely that I shall wear my shorts outdoors next winter.” But I do not think that such an utterance expresses a genuine probability for an act; rather I would construe this utterance as expressing that I find it very unlikely to get into a decision situation next winter in which it would be best to wear my shorts outdoors, i.e. that I find it very unlikely that it will be warmer than 20°C next winter, that someone will offer me DM 1000.- for wearing shorts outdoors, or that fashion suddenly will prescribe wearing shorts, etc. Besides, it is characteristic of such utterances that they refer only to acts which one has not yet to decide upon. As soon as I have to make up my mind whether to wear my shorts outdoors or not, my utterance is out of place.
Decision screens off thought from action. When you really make a decision, that is the end of the matter, and the actions to carry it out flow inexorably.
Yes, but that arguably means we only make decisions about which things to do now. Because we can’t force our future selves to follow through, to inexorably carry out something. See here:
Our past selves can’t simply force us to do certain things; the memory of a past “commitment” is only one factor that may influence our present decision making, but it doesn’t replace a decision. Otherwise, whenever we “decide” to definitely do an unpleasant task tomorrow rather than today (“I’ll do the dishes tomorrow, I swear!”), we would the next day in fact always follow through with it, which isn’t at all the case.
I think in some cases an embedding approach produces better results than either an LLM or a simple keyword search, but I’m not sure how often. For a keyword search you have to know the “relevant” keywords in advance, whereas embeddings are a bit more forgiving. Though not as forgiving as LLMs, which on the other hand can’t give you the sources and may make things up, especially for information that doesn’t occur very often in the source data.
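As a rough illustration, here is a minimal sketch of embedding-based retrieval; the `embed` argument is a placeholder for whatever embedding model is used (an API or a local model), not the specific setup discussed here.

```python
# Rank documents by cosine similarity between query and document embeddings.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query, documents, embed, top_k=3):
    """Return the top_k documents whose embeddings are closest to the query embedding."""
    query_vec = embed(query)
    scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

Unlike keyword search, this can surface documents that merely paraphrase the query without sharing its exact words, and unlike an LLM answering from memory it still points back to concrete source passages.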
I think my previous questions were just too hard; it does work okay on simpler questions. Though then another question is whether text embeddings actually improve over keyword search or over just using an LLM. They seem to be a middle ground between Google and ChatGPT.
Regarding data subsets: recently there were some announcements of more efficient embedding models. Though I don’t know how their relevant parameters compare to that OpenAI embedding model.
Since we can’t experience being dead, this wouldn’t really affect our anticipated future experiences in any way.
That’s a mistaken way of thinking about anticipated experience; see here:
evidence is balanced between making the observation and not making the observation, not between the observation and the observation of the negation.
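As a numeric illustration of that point, with made-up probabilities: by conservation of expected evidence, the prior has to equal the probability-weighted average of the posteriors over “observation made” and “no observation made”, not over “observe X” and “observe not-X”.

```python
# Numeric sketch (invented probabilities) of conservation of expected evidence.
p_h = 0.5                    # prior on some hypothesis H
p_obs_given_h = 0.9          # chance of making the observation if H holds
p_obs_given_not_h = 0.5      # chance of making the observation if H doesn't hold

p_obs = p_obs_given_h * p_h + p_obs_given_not_h * (1 - p_h)

p_h_given_obs = p_obs_given_h * p_h / p_obs                  # update on observing
p_h_given_no_obs = (1 - p_obs_given_h) * p_h / (1 - p_obs)   # the silent branch

# The weighted average of the posteriors over "observation made" vs
# "no observation made" recovers the prior exactly.
assert abs(p_h_given_obs * p_obs + p_h_given_no_obs * (1 - p_obs) - p_h) < 1e-9
```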
I think GPT-4 fine-tuning at the time of the ChatGPT release probably would have been about as good as GPT-3.5 fine-tuning actually was when ChatGPT launched. (Which wasn’t very good, e.g. jailbreaks were trivial and it always stuck to its previous answers even if a mistake was pointed out.)
There are also cognitive abilities, e.g. degree of intelligence.
Would OpenAI also, in theory, have been able to release sooner than they did, though?
Yes, I think they mentioned that GPT-4 finished training in summer, a few months before the launch of ChatGPT (which used a fine-tuned version of GPT-3.5).
That’s like dying in your sleep. Presumably you strongly don’t want it to happen, no matter your opinion on parallel worlds. Then dying in your sleep is bad because you don’t want it to happen. For the same reason vacuum decay is bad.
Exactly. That’s also why it’s bad for humanity to be replaced by AIs after we die: We don’t want it to happen.
It’s the old argument by Epicurus from his letter to Menoeceus:
The most dreadful of evils, death, is nothing to us, for when we exist, death is not present, and when death is present, we no longer exist.
I agree with the downvoters that the thesis of this post seems crazy. But aren’t entertainment and art superstimuli? Aren’t they forms of wireheading?