Fiat lux.
I mostly import old sequences from others, organise the Berlin AI alignment meetup, and participate in other Berlin events.
One argument for infra-Bayesianism is that Bayesianism has some disadvantages (like performing badly in nonrealizable settings). The existing examples in this post are decision-theory examples: Bayesianism + CDT performing worse than infra-Bayesianism.
(And if Bayesianism + CDT performs badly, why not choose Bayesianism + UDT? Or is the advantage just that infra-Bayesianism is more beautiful?)
Are there non-decision-theory examples that highlight the advantage of infra-Bayesianism over Bayesianism? That is, cases in which infra-Bayesianism leads to “better beliefs” than Bayesianism, without necessarily taking actions? Or is the yardstick for beliefs here that the agent receives higher reward, and everything else is irrelevant?
There used to be an SSC/ACX group in Munich that I organized; maybe shoot a mail to ssc_munich@lists.posteo.de to see if anyone responds? (It might take a day or so, since I will have to approve you as a sender.)
If that fails, there is a very lovely EA group in Munich that I participated in :-) Information here.
I’m interested.
Just some quick feedback on the “Continue Reading” feature: at the moment, when I read a post from the middle of a sequence, the next recommended post is the first post of that sequence, but I would like it to be the post after the last one I read in the sequence. Perhaps this is intentional, but then I wouldn’t use the feature, since I already try to read the posts in order (sometimes without being logged in).
I am currently reading it; right now I am in the Quantum Physics sequence. I read it all here on LessWrong; I did not buy or read the book version. I sometimes skim the comments a bit, but sadly the threads have frayed over the years and it is hard to follow a conversation. I don’t remember any specific occasion where the comments enlightened me in a new way, though they are sometimes interesting. I doubt it is necessary to read them, though.
Plan 9 from Bell Labs comes to mind (papers & man pages): by the creators of Unix, tight network integration (better than any other system I have seen so far), UTF-8 all the way down, and an interesting concept of per-process, inheritable namespaces (a rough sketch below).
It used up way too many weirdness points, though, and was fighting the old Worse is Better fight. It lost, and we are left with the ugly and crufty Unices of today.
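In case the namespace idea sounds abstract, here is a minimal sketch in Plan 9 C of how a process takes a private copy of its namespace and modifies it. rfork(2) and bind(2) are the real Plan 9 calls; the bound path /usr/glenda/mybin is made up for illustration.

```c
/* Minimal sketch of Plan 9's per-process namespaces (Plan 9 C, not POSIX). */
#include <u.h>
#include <libc.h>

void
main(void)
{
	/* RFNAMEG gives this process a private copy of its namespace:
	   changes below are invisible to the parent and its group,
	   but are inherited by any children this process spawns. */
	if(rfork(RFNAMEG) < 0)
		sysfatal("rfork: %r");

	/* Union a directory onto /bin for this process only;
	   MAFTER means it is searched after the existing /bin. */
	if(bind("/usr/glenda/mybin", "/bin", MAFTER) < 0)
		sysfatal("bind: %r");

	/* This process (and its children) now see the modified /bin;
	   every other process on the machine is unaffected. */
	print("namespace modified\n");
	exits(nil);
}
```

Because mounting and binding need no special privileges and are per-process, things like union directories and network-transparent file trees fall out of this one mechanism.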
Another one that comes to mind is Project Xanadu. It was quite similar to the modern web, but a lot more polished and clean in design and concept. It probably failed because of its very late delivery and because it was too slow for the hardware of the time.
I guess that’s mostly the problem: ambitious projects use up a lot of weirdness points, and then fail to gain enough traction.
A project that will probably fall into the same category is Urbit. If you know a bit of computer science, the whitepaper is just pure delight, though after page 20 I completely lost track. It has fallen victim to weirdness hyperinflation. It looks clean and sane, but I assign ~98% probability that its network will never have more than 50,000 users in any one-month span.
You were downvoted, but I think somebody should try to explain exactly why.
Both the idea of and the terminology for “sages” are highly questionable. They evoke mysterious answers and arguments from authority, which are exactly what we would want to avoid.
Could you maybe clarify a bit more what you mean by the word “sage”? It seems like you conflate people who want to solve problems and win with people who want deep insights into the nature of meaning.
Possible (?) typo: “V is ‘Two arms and ten digits’” could be meant as “V is ‘Two arms and ten fingers’”.
Never mind, I didn’t know “digits” was another word for fingers.
I started tracking my productivity at the beginning of this month, writing a “master plan” so that at every moment I know exactly what I should do next (okay, not “exactly” exactly, but well enough that it could in theory fill more than one day).
I realized how bad it is. Which is excellent.
I’m not sure how much to include in the plan. At the moment it is so big that even with perfect self-restraint, wasting not a single minute of the day, I would barely get it done. That seems okay, since I have sorted the activities by priority, and I have been improving since I started.
But does anybody have experience with trying out stricter vs. less overwhelming plans and observing whether (and if yes, how much and in which direction) they influence success?
Pretty cool. I like the aleph for scholarship.
Two things:
- It looks like evenness is missing from your post.
- I would represent the void with empty space as well, but add the caption “THE VOID” underneath.
The meetup is now cancelled, sorry for the confusion.