What makes you think so? The main reason I can see why the death of less than 100% of the population would stop us from recovering is if it’s followed by a natural event that finishes off the rest. However, 25% of current humanity seems much more than enough to survive all natural disasters that are likely to happen in the following 10,000 years. The Black Death killed about half the population of Europe, and even that wasn’t enough to destroy the pre-existing social institutions.
Squark
Meetup : Giving What We Can at LessWrong Tel Aviv
Hi Peter! I am Vadim, we met in a LW meetup in CFAR’s office last May.
You might be right that SPARC is important, but I’d really like to hear from the horse’s mouth what their strategy is in this regard. I’m inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess that establishing a continuous education programme would have much more impact than a two-week summer workshop. It’s not obvious what the optimal distribution of resources is (many two-week workshops for many people, or one long program for fewer people), but I haven’t seen such an analysis from CFAR.
The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.
It feels like there is an implicit assumption in CFAR’s agenda that most of the important things are going to happen within the next decade or two. Otherwise it would make sense to place more emphasis on creating educational programs for children, where the long-term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short-term assumption?
Meetup : Tel Aviv: Nick Lane’s Vital Question
On the other hand, articles and books can reach a much larger number of people (case in point: the Sequences). I would really like to see a more detailed explanation from CFAR of the rationale behind their strategy.
Thank you for writing this. Several questions.
How do you see CFAR in the long term? Are workshops going to remain at the center? Are you planning some entirely new approaches to promoting rationality?
How much do you plan to scale up? Are the workshops intended to produce a rationality elite, or to eventually become more of a mass phenomenon?
It seems possible that revolutionizing the school system would have a much higher impact on rationality than providing workshops for adults. SPARC might be one step in this direction. What are your thoughts / plans regarding this approach?
Facebook event: https://www.facebook.com/events/796399390482188/
Meetup : Tel Aviv Game Night
Meetup : Game Night in Tel Aviv
!!! It is October 27, not 28 !!!
Also, it’s at 19:00
Sorry but it’s impossible to edit the post.
Meetup : Tel Aviv: Hardware Verification and FAI
First, like was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects, like we want utility to be linear in human lives and bounded utility seems to fail that.
This is not quite what happens. When you do UDT properly, the result is that the Tegmark level IV multiverse has finite capacity for human lives (when human lives are counted with 2^{-Kolmogorov complexity} weights, as they should be). Therefore the “bare” utility function has some kind of diminishing returns, but the “effective” utility function is roughly linear in human lives once you take their “measure of existence” into account.
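To spell out the weighting I mean (a rough sketch in my own notation, assuming prefix Kolmogorov complexity): the effective utility counts lives with complexity weights, and those weights sum to at most 1 by Kraft’s inequality, which is where the finite capacity comes from.

```latex
% Rough sketch, my own notation: U_w = number of human lives in hypothesis/world w,
% K(w) = prefix Kolmogorov complexity of w.
U_{\mathrm{eff}} \;=\; \sum_{w} 2^{-K(w)}\, U_w,
\qquad \sum_{w} 2^{-K(w)} \le 1 \quad \text{(Kraft's inequality)}.
```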
I consider it highly likely that bounded utility is the correct solution.
If you have trouble finding the location, feel free to call me (Vadim) at 0542600919.
Meetup : Tel Aviv: Black Holes after Jacob Bekenstein
In order for the local interpretation of Sleeping Beauty to work, it’s true that the utility function has to assign utilities to impossible counterfactuals. I don’t think this is a problem...
It is a problem in the sense that there is no canonical way to assign these utilities in general.
In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was specified as a sum of the utilities of local properties of the universe. This is what both allows local “consequences” in Savage’s theorem, and specifies those causally-inaccessible utilities.
True. As a side note, Savage’s theorem is not quite the right tool here, since it produces both probabilities and utilities, while in our situation the utilities are already given.
This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be “okay” to choose the inaccessible utilities arbitrarily such that the utility could be expressed in terms of local properties.
The problem is that different extensions produce completely different probabilities. For example, suppose U(AA) = 0, U(BB) = 1. We can decide U(AB) = U(BA) = 0.5, in which case the probability of each copy is 50%. Or we can decide U(AB) = 0.7 and U(BA) = 0.3, in which case the probability of the first copy is 30% and the probability of the second copy is 70%.
The ambiguity is avoided if each copy has an independent source of randomness, because then all of the counterfactuals are “legal.” However, as the example above shows, these probabilities depend on the utility function. So even if we consider Sleeping Beauties with independent sources of randomness, the classical formulation of the problem is ambiguous, since it doesn’t specify a utility function. Moreover, if all of the counterfactuals are legal, it might be that the utility function doesn’t decompose into a linear combination over copies, in which case there is no probability assignment at all. This is why Everett branches have well-defined probabilities but e.g. brain emulation clones don’t.
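To make the ambiguity concrete, here is a minimal numerical sketch of the example above (the decomposition U(xy) = p1·v(x) + p2·v(y) with v(A) = 0, v(B) = 1 is my own framing, not a canonical construction):

```python
# Minimal sketch: recover copy "probabilities" from a linear decomposition of
# the utility over the two copies, U(xy) = p1 * v(x) + p2 * v(y) with
# v(A) = 0, v(B) = 1. Assumes U(AA) = 0 and U(BB) = 1 as in the example above.

def copy_probabilities(u_ab, u_ba, u_aa=0.0, u_bb=1.0, tol=1e-9):
    """Return (p1, p2) if a consistent linear decomposition exists, else None."""
    # With v(A) = 0, v(B) = 1 the decomposition forces p2 = U(AB) and p1 = U(BA).
    p1, p2 = u_ba, u_ab
    # Consistency with the fixed endpoints U(AA) and U(BB).
    if abs(u_aa - 0.0) > tol or abs((p1 + p2) - u_bb) > tol:
        return None
    return p1, p2

print(copy_probabilities(0.5, 0.5))  # (0.5, 0.5): both copies get weight 50%
print(copy_probabilities(0.7, 0.3))  # (0.3, 0.7): first copy 30%, second copy 70%
print(copy_probabilities(0.9, 0.5))  # None: no linear decomposition, hence no probabilities
```

The last case illustrates the point above: when the utility function doesn’t decompose into a linear combination over copies, there is simply no probability assignment to read off.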
“Neural networks” vs. “Not neural networks” is a completely wrong way to look at the problem.
For one thing, there are very different algorithms lumped under the title “neural networks”. For example, Boltzmann machines and feedforward networks are both called “neural networks”, but IMO that’s more because it’s a fashionable name than because of any actual similarity in how they work.
More importantly, the really significant distinction is making progress by trial and error vs. making progress by theoretical understanding. The goal of AI safety research should be shifting the balance towards the second option, since it is much more likely to yield results that are predictable and satisfy provable guarantees. In this context I believe MIRI correctly identified multiple important problems (logical uncertainty, decision theory, naturalized induction, Vingean reflection). I am mildly skeptical about the attempts to attack these problems using formal logic, but the approaches based on complexity theory and statistical learning theory that I’m pursuing seem completely compatible with various machine learning techniques, including ANNs.