The integral of 1/t is log(t), so a hyperbolically discounted constant stream of torture over infinite time still sums to infinity. Hence your intuition about the Faustian bargain is consistent with hyperbolic discounting (but not with exponential discounting).
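A sketch of the computation, assuming the standard hyperbolic discount factor $D(t) = 1/(1+kt)$ and a constant disutility rate $c$, with the exponential case $D(t) = e^{-rt}$ shown for contrast:

$$\int_0^\infty \frac{c}{1+kt}\,dt \;=\; \frac{c}{k}\,\ln(1+kt)\Big|_0^\infty \;=\; \infty, \qquad \int_0^\infty c\,e^{-rt}\,dt \;=\; \frac{c}{r} \;<\; \infty.$$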
Nonsense.
Going from 100 to 150,000 is not 17 doublings but log(150000/100)/log(2) ≈ 10.5 doublings.
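For reference, a quick check in Python (nothing assumed beyond the numbers above):

```python
import math

# Doublings needed to grow from 100 to 150,000:
print(math.log(150_000 / 100, 2))  # ~10.55
```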
I live in Berlin and would potentially be willing to travel to Munich (on not-too-short notice). However, during July and August I’m going to be in Spain and unable to come to Munich.
I wonder what the underlying reason is for intuition working better than deliberate reasoning in some cases. Maybe it's because in those cases considered judgments themselves rely on other intuitions that are less accurate, for example about the importance of certain evidence. In the basketball case, I can imagine that experts who are asked to deliberate carefully take into account information about injured players or recent “trends”, which turns out to be much less important than their intuition tells them.
I am participating in a just-for-fun soccer prediction game with friends, where one is awarded points for correctly predicted results or tendencies. In the beginning I relied on gut feeling and information from soccer news sites like everyone else, and performed pretty badly. After a while, I implemented a simple script that takes odds from betfair.com, a prediction market for sports, and calculates the prediction with the highest expected points. Since then, I have been steadily climbing the ladder.
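The core of such a script might look like the sketch below. The scoring rule (3 points for the exact result, 1 for the correct tendency) and the probabilities are made-up illustrations, and the step of converting the site's odds into probabilities is left out:

```python
def tendency(score):
    """Return 'home', 'draw', or 'away' for a (home_goals, away_goals) pair."""
    home, away = score
    return "home" if home > away else "away" if home < away else "draw"

def expected_points(prediction, result_probs):
    """Expected points of one predicted score under a result distribution."""
    ev = 0.0
    for result, prob in result_probs.items():
        if result == prediction:
            ev += 3 * prob                       # exact result
        elif tendency(result) == tendency(prediction):
            ev += 1 * prob                       # correct tendency only
    return ev

def best_prediction(result_probs):
    """Pick the candidate score with the highest expected points.

    Candidates are restricted to the listed scorelines for brevity.
    """
    return max(result_probs, key=lambda p: expected_points(p, result_probs))

# Illustrative probabilities (in reality derived from market odds):
probs = {(1, 0): 0.20, (2, 1): 0.15, (1, 1): 0.25,
         (0, 0): 0.10, (0, 1): 0.18, (1, 2): 0.12}
print(best_prediction(probs))  # (1, 1) under these numbers
```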
I’m going to be in Barcelona from July through September and would be happy to participate in a meetup during that time.
5 million
Even a FOOM seed of only a few hundred bytes would not necessarily have been produced by evolution: there are 2^800 different possibilities for a 100-byte snippet. Only if there are intermediate steps of increasing complexity and fitness can evolution find a solution in such a large search space. If the shortest possible seed is completely isolated in the search space, there is no way it can be found, neither by evolution nor by deliberate optimization.
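To put the size of that space in perspective (using the common rough estimate of 10^80 atoms in the observable universe):

```python
# Search space for a 100-byte (= 800-bit) snippet:
print(2 ** 800)                   # an integer with 241 digits
print(f"{float(2 ** 800):.2e}")   # ~6.67e+240, vastly more than ~1e80 atoms
```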
I think something like question 9 is very helpful. Even more important than having an a priori success criterion, instead of defining one a posteriori to fit whatever one has accomplished (though in science redefining success criteria sometimes makes sense, because one never knows exactly in advance what can be accomplished), is the mental exercise of coming up with such a criterion. If one isn't even able to do this, that is a very bad sign for the project.
What exactly are we trying to learn from this thought experiment that we cannot already learn from the torture/dust-speck experiment?
If the AI is designed to follow the principle to the letter, it has to request approval from the designer even for the action of requesting approval, leaving the AI incapable of any action. If the AI is designed to be able to make certain exemptions, it will figure out a way to modify the designer without needing approval for that modification.
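A minimal sketch of the regress in the first case (hypothetical functions, not any real AI framework): if requesting approval is itself an action that first needs approval, the agent never reaches the point of acting at all.

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable failure short

def perform(action):
    request_approval(action)  # the principle: approval before any action
    action()                  # never reached

def request_approval(action):
    # Requesting approval is also an action, so by the letter of the
    # principle it must itself go through perform():
    perform(lambda: print(f"Designer, may I perform {action!r}?"))

perform(lambda: print("make a paperclip"))  # raises RecursionError
```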
Do we actually know that our discounting function is hyperbolic in the range below 5 minutes? Or is that just extrapolation from experiments done on longer intervals?
Referencing long-term consequences could also be viewed as having empathy with one's future self. Instead of thinking “What do I care about the me of tomorrow?”, one creates the impression/illusion of a continuous personality. Maybe empathy for others even evolved by piggybacking on empathy for future versions of oneself.
I’ll be there!