If the learning agent does not find any new knowledge, why does it make Martha report having learned something new? Why not make her feel as if nothing had changed?
Edinburgh LW meetup, as usual
Why can’t 2+2=4 also be an observed fact? It’s just not a fact that is localizable in time or space.
I think instead of universal vs. contingent, it’s better to think non-localizable vs. localizable. Or if you like, location-independent vs. location-dependent.
I like Voevodsky’s pragmatism. The universe/mathematics doesn’t explode when you find an inconsistency; only your current tools for determining mathematical truth do. And one might possibly locally patch up our tools for verifying proofs even within a globally inconsistent system.
I shall attend.
Our ancestors didn’t have the benefit of modern medicine, so some causes of chronic pain may have just killed them outright. On the other hand, not all of the things causing chronic pain today were an issue back then.
I was actually using pain as an analogy for suffering. I know that chronic pain simply wasn’t as much of an issue back then, which was why I compared chronic pain to chronic suffering. If chronic suffering was as rare back then as chronic pain was (they both sure seem more common now), then there is no issue.
Are the attention-allocational conflicts we modern people experience somehow more intractable? Do our built-in heuristics, which usually spring into action upon noticing the suffering signal, fail in such vexing attention-allocational conflicts?
Why do we need to have read your post and then employ this quite conscious and difficult process of working out the attention-allocational conflict? Why didn’t the suffering just do its job, without us needing to apply theory to figure out its purpose before we could resolve the conflict?
Fixing the problem requires removing chronic pain without blocking acute pain when it’s useful.
I guess you can look at it as a tradeoff between type I and type II errors. But you could also simply improve the cognitive algorithms with which you respond to a suffering signal.
No, I didn’t mean that the badness was bad and hence evolution would want it to go away. Acute suffering should be enough to make us focus on conflicts between our mental subsystems. It’s as with pain: acute pain makes you flinch away from danger, but chronic pain is quite useless and possibly maladaptive, since it leads to needless brooding, wailing, and distraction, which does not address the underlying unsolvable problem and might well exacerbate it.
Suffering arises all too readily, IMHO (or am I misjudging this?), for evolution not to have taken chronic attention-allocational conflict into account and come up with a fix.
To take an example for comparison: is the ratio of chronic to acute pain roughly equal to the ratio of chronic to acute attention-allocational conflict? My intuitions fail me here, but I seem to personally experience more chronic suffering than chronic pain. But then again, I was diagnosed with mild depression before, and hence am not typical.
Edinburgh LW meetup, Saturday May 28, 2pm
It seems to me that utility functions are equivalent up to more than just affine transformations. Both utility functions and subjective probability distributions seem to take some relevant real-world factor into account, and it seems you can move these representations between your utility function and your probability distribution while still getting exactly the same choices over all possible decisions.
In the case of discounting, for example, you could represent the uncertainty in a time-discounted utility function, or you could do it with your probability distribution. You could even throw away your probability distribution entirely and have your utility function take all subjective uncertainty into account.
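Here is a minimal sketch of the equivalence I have in mind (my own notation, not from any formal treatment). Take exponential discounting with factor $\gamma \in (0,1)$. Ranking plans by the discounted expected utility

$$\sum_t \gamma^t \, \mathbb{E}[u(x_t)]$$

gives exactly the same choices as ranking them by an undiscounted utility function paired with a probability distribution that adds a per-period survival probability $\gamma$, since then $\Pr(\text{still around at } t) = \gamma^t$ and

$$\sum_t \Pr(\text{still around at } t) \, \mathbb{E}[u(x_t)] = \sum_t \gamma^t \, \mathbb{E}[u(x_t)].$$

The same choice behavior can thus be bookkept either as discounting inside the utility function or as a hazard rate inside the probability distribution.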
At least I think that’s possible. Have there been any formal analyses of this idea?
May I ask how the doubling time of the economy can suggest how we discount future utility?
One predictable way I have seen many rationalists (including myself) deceive themselves is by flooding their working memory and confusing themselves. They do this via nitpicking, pursuing arguments and counter-arguments depth-first down a rabbit hole while neglecting other, shallower ones, using long and grammatically complex sentences, and so on. There are many ways. All you have to do is ensure that you max out your working memory, which then makes you less able to self-monitor for biases.
How do you counter this? Note that arguments are not systematically distributed with respect to their complexity, so it’s best just to stick to simple arguments which you can fully comprehend, with some working memory capacity to spare.
Edinburgh LW meetup, Sunday May 15, 2pm
It is easier to say new things than to reconcile those which have already been said.
Vauvenargues, Reflections and Maxims, 1746
You won’t be great just by sitting there, of course, but I suspect great people wouldn’t be as great if they weren’t driven, to some extent, by an urge to achieve greatness for its own sake.
Great people also like to countersignal that their greatness was never something they had in mind, and that they are just truly dedicated to their art.
Edinburgh LW meetup, Saturday May 7, 2pm
OK. So, as we have agreed, we will discuss our mini-presentations for next week’s (yes, it’s weekly now) meetup here.
Mine is simple: it will be a summary of Schelling’s The Strategy of Conflict :)
What’s yours?
Thanks! That makes sense.
Fake-FAQs can be a method of misrepresenting arguments against your viewpoint, along the lines of: “Check out all these silly arguments anti-consequentialists frequently use”. That’s just an example; I’m not saying Yvain is doing this.
Yup, that’s how reality does it as well with the principle of least action.