Looks like SSC meetups are still ongoing: https://www.lesswrong.com/events/ifsZbNmHwxhCm7F4n/slate-star-codex-meetup?commentId=ZccQoDDQY2skHDsAY#ZccQoDDQY2skHDsAY
This whole time? Man, I haven’t been looking hard enough. What’s the algorithm, 2d Saturdays at 1900?
Ahh, I think I did not think through what “rationality enhancement” might mean; perhaps my own recent search and the AI context of Yudkowsky’s original intent skewed me a little. I was thinking of something like “understanding and applying concepts of rationality” in a way that might include “anticipating misaligned AI” or “anticipating AI-human feedback responses”.
I like the way you’ve framed what’s probably the useful question. I’ll need to think about that a bit more.
Cool, thanks for sharing.
I posted about my academic research interest here, do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, but especially Germany.
Cool, that sounds like a pretty useful combination.
I’d love to. The soonest I’d be available in August would be at the end of the month. I’m sure we can find somewhere public that would work. What will you be studying?
A few observations.
First, it seems likely that the increase in positivity can be explained by fewer precautionary tests: fewer people are getting tested “just to be sure”, and fewer people are being required by work/travel/etc. to get tested. That means fewer negative tests diluting the pool, so the positivity rate rises.
Second, it seems likely to me that the “93%, 93%, 91%” numbers are calculated independently of each other: the vaccinated group was 93% less likely to contract the virus than the unvaccinated group, 93% less likely to be hospitalized, and 91% less likely to die. So with Alpha, all probabilities were reduced roughly uniformly. Now consider a variant (Delta) against which the vaccine is less effective at preventing symptoms of any level, but still about as effective at preventing hospitalizations and deaths. This would decrease the likelihood of the vaccine preventing a positive test or symptoms, while leaving the hospitalization and death numbers largely unchanged. This makes sense in my head, but perhaps there’s something I’m missing?
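To make that concrete, here’s a toy sketch (the baseline risks and the 60% figure are numbers I made up purely for illustration, not real data) of how the three effectiveness numbers get computed independently, and how effectiveness against infection can drop while effectiveness against hospitalization and death stays roughly the same:

```python
# Hypothetical per-exposure risks for an unvaccinated person (illustrative only).
p_unvax = {"infection": 0.30, "hospitalization": 0.030, "death": 0.003}

# Alpha-like case: the vaccine cuts every outcome roughly uniformly (~93%).
alpha_vax = {k: v * 0.07 for k, v in p_unvax.items()}

# Delta-like case: weaker protection against any infection (assume 60% here),
# but still ~93% protection against hospitalization and death.
delta_vax = {
    "infection": p_unvax["infection"] * 0.40,
    "hospitalization": p_unvax["hospitalization"] * 0.07,
    "death": p_unvax["death"] * 0.07,
}

for name, vax in [("alpha-like", alpha_vax), ("delta-like", delta_vax)]:
    for outcome, baseline in p_unvax.items():
        effectiveness = 1 - vax[outcome] / baseline
        print(f"{name}: effectiveness vs {outcome} = {effectiveness:.0%}")
```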
Finally, a typo that tripped me up a bit:
We should also look at case counts in Israel. On June 18 they had 1.92 cases per million, right before things started rising, on June 14 it was 65.09, for R0 = 1.97. From previous data, we can presume that when Delta was a very small portion of Israeli cases, the control system adjusted things to something like R0 = 1, so we’ll keep that number in mind.
The second “June” should be “July”, as in “July 14”. (Small nitpick, I know, but it took me a minute to work out, so I figured I’d share.)
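For anyone who wants to reproduce the arithmetic behind that R0 ≈ 1.97, it falls out if you treat the growth as exponential and assume a serial interval of roughly 5 days (my assumption; the quoted passage doesn’t state one):

```python
cases_start = 1.92    # cases per million on June 18, from the quoted passage
cases_end = 65.09     # cases per million on July 14
days = 26             # June 18 -> July 14
serial_interval = 5   # days per generation of infections (assumed)

growth = cases_end / cases_start
r0 = growth ** (serial_interval / days)   # per-generation growth factor
print(round(r0, 2))                       # ~1.97
```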
I’ve started formalizing my research proposal, so I now have:
I intend to use computational game theory, system modeling, cognitive science, causal inference, and operations research methods to explore the ways in which AI systems can produce unintended consequences, and to develop better methods to anticipate outer alignment failures.

Can anyone point me to existing university research along these lines? I’ve made some progress after finding this thread, and I’m now planning to contact FHI about their Research Scholars Programme, but I’m still finding it time-consuming to match specific ongoing research with a given university or professor, so if anyone can point me to other university programs (or professors to contact) which would fit well with my interests, that would be super helpful.
Wait, isn’t that an example of efficiency of scale being dependent on investment? You have to get a 1-foot rope and scissors, but once you have them, you can create two 1⁄2-foot ropes? I think the “given a 1-foot rope” is doing more work than you realize, because when I try to apply your example to the world above, I keep getting hung up on “but in the imaginary world above, when we account for economy of scale, if you just needed one 1⁄2-foot rope, you would just create a 1⁄2-foot rope, and that would take you half the time of creating 1 foot of rope.” And for the David, I feel like “sure, but that doesn’t explain why someone wouldn’t just carve their own David if they wanted one”. I think I’m bypassing some of the issue here, but I’m not entirely sure what it is.
It does, however, bring up another interesting reason for trade (and this may be part of how investment can be independent of efficiency of scale): shared resources. If the cost of a pair of scissors does not scale with how often I use them, and I only use them once per day, I can increase efficiency/decrease required investment by trading their use so others can use them when I’m not. This applies to the David as well: utility gained from the David is not zero-sum; multiple people can gain utility from it without decreasing the utility the others gain, so it does not make sense for everyone to carve their own. So any time a resource or product produces non-zero-sum benefits by existing, we have a reason for its use to be shared, and for trade to be involved in sharing it.
Applying this, if 5 people each carve a statue and put them in a sculpture garden in exchange for access to the garden, they can each enjoy five statues (alternatively, they could collaborate to build the statue in 1/5th the time and share in the enjoyment of it).
Not sure this is what you were getting at, but I think I’ve talked myself into thinking that when investment is independent of efficiency of scale, it’s because of the non-zero-sum nature of some shared resources.
Hmm. I feel like it’s relevant that your example relies on trade, which we’re trying to eliminate. Therefore, if all of the other reasons for trade go away, this example would be irrelevant.
But can we recreate it elsewhere? Perhaps there is some task which is time sensitive, but cannot be done by one person (in their remaining marginal time) at a speed which does not decrease marginal gains. Information sharing comes to mind, but that seems to have already been accomplished by the society outlined above.
Yeah, I think we’re in agreement. I can’t think why there would ever be a minimum, except to exceed the break-even point on fixed costs.
Any chance this will be resuming any time soon?
Some interesting responses here, and although I didn’t read through all of them, I read enough to get a sense of the kind of approach most people seem to be taking here.
As someone who was where you are now about five years ago, I will share the way I think about it, especially since it seems quite distinct from the approach most people are taking here.
Short answer (and hot take for this crowd): it’s not. The kind of morality I believed in as a Christian (an objective truth about things being Right and Wrong) is not possible without a god.
The illusion of such a world, however, is very possible, and in fact predicted by some pretty prominent evolutionary psychology theories of behavior. If you have not read The Selfish Gene, I highly recommend it as Dawkins’ treatment of this issue is the best I’ve heard and the (I’m pretty sure) origin of every other good explanation I’ve heard from elsewhere.
In essence, the illusion of a world with an objective moral reality is the evolutionary response to the cooperation problem associated with repeated games where actors have the ability to hold a grudge: in any single game, the optimal strategy is to pursue the course which grants the maximum individual reward (the defect strategy in the prisoner’s dilemma), but in repeated games within a population able to hold a grudge, this strategy is out-competed by a “tit-for-tat with initial cooperation” strategy. Therefore, a person inclined to cooperate with others toward the optimal group strategy, at least until betrayed, will outcompete someone who looks out only for himself. Having this tendency manifest as a general feeling about “the right way to do things” was simply the easiest evolutionary pathway toward producing it.
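As a minimal sketch of that claim (standard prisoner’s-dilemma payoffs; the round count and population mix are my own illustrative choices), here’s a tiny simulation in which always-defect wins any single pairing against tit-for-tat, yet ends up with the lowest total score in a population of grudge-holding cooperators:

```python
# Payoffs: (my score, their score) for (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's last move (the grudge).
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin: four tit-for-tat players and one pure defector.
players = [("tit-for-tat", tit_for_tat)] * 4 + [("always-defect", always_defect)]
totals = [0] * len(players)
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        si, sj = play(players[i][1], players[j][1])
        totals[i] += si
        totals[j] += sj

for (name, _), total in zip(players, totals):
    print(name, total)
# The defector beats each cooperator head-to-head, but the cooperators'
# mutual cooperation leaves the defector with the lowest overall total.
```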
But why not have a sense of “pretend to cooperate until no one is looking, then do what’s best for yourself”? For one, because then you wouldn’t be righteously indignant and impassioned when you caught someone else following this strategy (important for the tit-for-tat part), but also because pretending involves lying. If the evolved strategy is to lie, then the expected co-evolution would be the ability to detect lies; a feedback loop would then follow until a solution emerged that lets a person lie without realizing it himself, so that he doesn’t give himself away. This is in fact what we find when people passionately defend behavior that to everyone else is blatant hypocrisy: they are self-deceived and therefore don’t register the inconsistency (this is also a large part of many of the fallacies discussed in the sequences).
Let’s test this against your example (and some of the others posted here): murder. What we consider to be “murder” is usually undeserved killing, typically done to benefit oneself.
Does this improve the outcome for an individual game? Yes: you get to take what the victim has.
But what about repeated games, where other players can hold a grudge? No: the other villagers will gang up on you when they see what you have done. And when the other villagers execute you as a group, this is “justice”, not “murder”. Why? Because it solves the cooperation problem by disincentivizing potential murderers. (Incidentally, this is why it’s so easy to come up with ethical dilemmas involving killing: we pit two competing psychological solutions against each other, “don’t kill” vs. “justice”.)

How else to test this? Go through the commands in the Bible, and do your best to answer “would I feel this way if I hadn’t read this?” I predict that >90% of the ones for which you say “yes” can be shown to solve a cooperation problem found in the ancestral environment. (With lesser confidence, I predict that >50% of the ones for which you said “no” can be shown to have solved a cooperation problem present at the time of its writing.)
In retrospect, the alignment of psychology with the ancestral environment that the sequences demonstrated was one of the arguments which most strongly (downwardly) updated my belief in God. Why does the killing of a pre-pubescent seem so much worse than the killing of someone older? Because the older person is a competitor rather than a descendant/kin. Why does abortion seem worse the older the baby gets? Because the baby is becoming increasingly viable. Why am I more emotionally motivated by the fate of those close to me than by the fate of an entire neighboring city? Because increased relatedness means more shared genes.
One final note: from a purely practical perspective, consider how much utility you are currently gaining from your beliefs. It may be too late to simply choose not to pursue this to its conclusion (it was for me), but consider the possibility that if you’re wrong, knowing doesn’t actually improve your utility. It was world-shattering for me to change my mind on this, and I honestly don’t know what I would do if I had an “unknow” button.
Looking through the comments, it seems like most of my thoughts have been captured (economy of scale, collaboration producing non-linear accumulation, etc.), but some of the others (risk management, the time axis of logistics) helped me come up with a new one: perishability. When we combine some of the other factors (especially risk), people will at times have a perishable surplus. At those times they would seek to convert this surplus into something non-perishable, or into some other thing they need at the time. If we had the society described above plus uniform starting conditions, and everyone used the same dice-rolls (i.e. everyone had the same good/bad corn year), I believe this reason for trade would cease to exist.
I agree with both, but claim that they are, in a sense, the same problem: if you solve the economy of scale issue, along with the parameters above, people would simply produce the amount desired with no diminishing marginal return problem on consumption.
Isn’t 2 just a product of 1? If 1 were not true, couldn’t you just get started at small scale? This may be understood, but if not, it seems useful to point out the entanglement.
Also, another aspect of the insurance is spoilage: some goods preserve better than others, so it makes sense to convert excess into something stable so that you can “self-insure”.
Unfortunately, a car is an unavoidable cost for me; I expect that is a large part of the difference.
I do have a car, but I don’t even live in the bay area and didn’t realize how many of you were in Berkeley. Makes sense now.
I think I underestimated how much of the Rationalist community was in the bay area. That fact alone resolves most of my confusion, thank you.
No problem. Looks like that will be the soonest I’ll be able to make it as well.