Why not?
Oh. I see your point.
It's the same question.
Yeah, if you use religious or faith-based terminology, it might trigger negative signals (downvotes). Though whether that's because people disagree with the information you meant to convey, or because the statements themselves are actually more ambiguous overall, would be harder to distinguish.
Some kinds of careful reasoning processes vibe with the community, and imo yours is that kind: questioning each step separately on its merits, being sufficiently skeptical of premises leading to conclusions.
Anyways, back to the subject of f and inferring its features. We are definitely having trouble drawing f out of the human brain in a systematic, falsifiable way.
Whether or not it is physically possible to infer it, or its features, or how it is constructed (i.e. whether it is possible at all) seems a little uninteresting to me. Humans are perfectly capable of pulling made-up functions out of their ass. I kind of feel like all the gold will go to the first group of people who come up with processes for constructing f in coherent, predictable ways, such that different initial conditions, when iterated over the process, produce predictably similar f.
We might then try to observe such a process throughout people's lifetimes, and sort of guess that a version of the same process is going on in the human brain. But nothing about how that will develop is readily apparent to me; this is just my own imagination producing what seems like a plausible way forward.
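To pin down what "different initial conditions, same process, predictably similar f" would even mean, here's a minimal toy sketch. Everything in it (the data, the linear preference model, the fitting loop) is made up for illustration, not a claim about how the real thing would work: fit a simple preference function from the same pairwise-choice data starting from several random initializations, and check that the fitted functions come out nearly identical.

```python
# Toy illustration only: a "construction process" that maps observed pairwise
# choices to a preference function f(x) = w . x, run from different random
# starting points to show that the results converge to predictably similar f.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" preferences that generated the observed choices.
true_w = np.array([2.0, -1.0, 0.5])

# Observed data: pairs of options (a, b) and which one was chosen.
A = rng.normal(size=(500, 3))
B = rng.normal(size=(500, 3))
chose_a = (A @ true_w > B @ true_w).astype(float)

def fit_f(seed, steps=2000, lr=0.1):
    """Fit w by gradient descent on a logistic choice model, from a random start."""
    w = np.random.default_rng(seed).normal(size=3)
    X = A - B
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # P(chose a | w)
        w -= lr * X.T @ (p - chose_a) / len(X)   # gradient step on the log-loss
    return w / np.linalg.norm(w)                 # compare directions, not scales

# Same process, different initial conditions -> near-identical fitted f.
print(np.round([fit_f(seed) for seed in range(5)], 3))
```

The open question in the comment above is whether anything like this scales to the messy human case; the sketch is only meant to make "predictably similar" concrete.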
Somehow, he has to populate the objective function whose maximum is what he will rationally try to do. How he ends up assigning those intrinsic values relies on methods of argument that are neither deductive nor observational.
In your opinion, does this relate in any way to the "lack of free will" arguments, like those advanced by Sam Harris? The whole: I can ask you what your favourite movie is, and you will think of one. You will even try to justify your choice if asked about it, but ultimately you had no control over which movies popped into your head.
I feel like there are local optima: getting to a different stable equilibrium involves having to "get worse" for a period of time, to question existing paradigms and assumptions. I.e. performing the update feels terrible, in that you get periodic glimpses of "oh, my current methodology is clearly inadequate", which is understandably crushing.
The "bad mental health/instability" is an interim step where you are trying to integrate your previous emotive models of certain situations with newer models that appeal to you intellectually (i.e. feel like they ought to be the correct models). There is conflict when you try to integrate those, which is often meta-discouraging.
If you’re curious about what could possibly be happening in the brain when that process occurs, I would recommend Mental Mountains by Scott A., or even better the whole Multiagent Models of Mind sequence.
No, that’s fair.
I was mostly having trouble digesting that 3-4-5 stage paradigm. I'm afraid it's not a very practically useful map; i.e. it doesn't actually help you instrumentally navigate anywhere. But I realized halfway through composing that argument that it's very possible I'm just wrong, so I decided to ask for an example of someone using this framework to actually successfully orient somewhere.
So the premise is that there are goals you can aim for. Could you give an example of a goal you are currently aiming for?
Would it be okay to start some discussion about the David Chapman reading in the comments here?
Here are some thoughts I had while reading.
When Einstein produced general relativity, the success criterion was "it reproduces Newton's laws of gravity as a special-case approximation". I.e. it had to produce the same models that had already been verified as accurate to a certain level of precision.
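(For concreteness, and not something from the reading itself, this is the standard textbook form of that correspondence: in the weak-field, slow-motion limit the metric reduces to the Newtonian potential and Einstein's equations reduce to the Poisson equation, so everything Newtonian gravity had already gotten right is recovered.)

```latex
% Standard weak-field, slow-motion limit of general relativity:
% the time-time metric component carries the Newtonian potential \Phi,
% and the field equations reduce to Newton's law in Poisson form.
\[
  g_{00} \approx -\Bigl(1 + \frac{2\Phi}{c^{2}}\Bigr),
  \qquad
  \nabla^{2}\Phi = 4\pi G \rho .
\]
```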
If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that’s not a problem with rationality. Quoting from a lesswrong post: We need the word ‘rational’ in order to talk about cognitive algorithms or mental processes with the property “systematically increases map-territory correspondence” (epistemic rationality) or “systematically finds a better path to goals” (instrumental rationality).
A happy, stable, productive you (or the previous stable version of you) is a necessary condition for using "more rationality". If it comes out otherwise, then it's not rationality; it's some other confused phenomenon, like a crisis of self-consistency. Which, if it happens and feels understandably painful, should eventually produce a better you at the end. If it doesn't, then it actually wasn't worth starting on the entire adventure, or stressing much about it.
Just to make sure I am not miscommunicating, “a little rationality can actually be worse for you” is totally a real phenomenon. I wouldn’t deny it.
I found the character sheet system to be very helpful. In short, it's just a ranked list of "features"/goals you're working towards, with a comment slot (it's just a Google Sheet).
I could list personal improvements I was able to gain from the regular use of this tool, like weight loss/exercise habits etc., but that feels too much like bragging. Also, I can't distinguish correlation from causation.
The cohort system provides a cool social way to keep yourself accountable to yourself.
Dead link for "Why Most Published Research Findings Are False". Googling just the URL parameters yields this.
Did anyone else get so profoundly confused that they googled "Artificial Addition"? It was only when I was halfway through the bullet point list that it clicked that the whole post is a metaphor for common beliefs about AI. And that was on my second read; the first time, I gave up before that point.
I shall not make the mistake again!
You probably will. I think this bias thing doesn't disappear even when you're aware of it; it's a generic human feature. I think self-critical awareness will always slip at the crucial moment; it's important to remember this and acknowledge it. Big things vs. small things, as it were.
On my more pessimistic days I wonder if the camel has two humps.
Link is dead. Is this the new link?
It seems less and less like a Prisoner's Dilemma the more I think about it. Chances are, "oops", I messed up.
I still feel like the thing with famous names like Sam Harris is that there is a "drag" force on his penetration of the culture nowadays, because there is a bunch of history that has been (incorrectly) publicized. His name is associated with controversy, despite his best efforts to avoid it.
I feel like you need to overcome a "barrier to entry" when listening to him. Unlike Eliezer, whose public image (in my limited opinion) is actually new-user friendly.
Somehow all of this is meant to tie back to Prisoner's Dilemmas, and in my head, for some reason, it does. Perhaps I ought to prune that connection. Let me try my best to fully explain that link:
It's a multi-stage "chess game" in which you engage with the ideas you hear from someone like Sam Harris; but there is doubt, because there is a (mis)conception of him saying "Muslims are bad" (a trivialization of his argument). What makes me think of a Prisoner's Dilemma is this: you have to enter a "cooperate" or "don't cooperate" game with the message, based on nothing more or less than the reputation of the source.
Sam doesn't necessarily broadcast his basic values regularly, that I can see. He's a thoughtful, quite rational person, but I feel like he forgets that his image needs work. He needs to do a kumbaya, as it were, once in a while, to reaffirm his basic beliefs in life and its preciousness. (And I bet if I looked, I'd find some, but it rarely percolates up in the feed.)
Anyway. Chances are I am wrong in using the concept of a Prisoner's Dilemma here. Sorry.
Ah, makes sense.
I could be off base here. But a lot of classical cooperate-vs-don't-cooperate stories involve two parties who hate each other's ideologies.
Could you then not say: "They have to first agree and/or fight a Prisoner's Dilemma on an ideological field"?
Tom M. Apostol, Calculus I && II (haven't fully read II). (Sorry, I don't have 3, I guess.)
So… a Prisoner's Dilemma, but on a meta level? Which then results in a primary consensus.
Yep. Just have to get into the habit of it.
If you were dead in the future, you would be dead already. Because time travel is not ruled out in principle.
Danger is a fact about fact density and your degree of certainty. Stop saying things with the full confidence of being afraid, and start simply counting the evidence.
Go back a few years. Start there.