He would probably say that he doesn’t care (he works for others, not for himself) and that alcohol doesn’t affect him, since people have already noted this and those were his answers. But to be honest, this whole thing is not that interesting to me, and I would classify it as weak evidence about what he believes or not. It is mostly gossip.
emanuele ascani
Interview with Aubrey de Grey, chief science officer of the SENS Research Foundation
Wow, ok, thank you. This is useful information. I didn’t take your ADHD/ADD hypothesis seriously to be honest, but now that you specify the nature of the test to diagnose it, it makes much more sense. I will research more and get tested.
No, my experience is limited to the gameplay videos I have seen. From what I’ve seen it seems very easy to communicate (via voice chat) and interact with the environments, which are also very customizable. I don’t know anything beyond this.
Thank you, I think I will try to pay attention to whether some “flickering” happens. It is a possibility.
[Question] Why is my (our?) reasoning process noisy?
It’s uncanny how sometimes we all arrive at the same conclusions privately
Why SENS makes sense
Regarding “If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%”: one similar survey has already been done. The result depends on whether you specify that an unlimited lifespan would be spent in good health rather than in increasing frailty. If you do, > 40% of respondents opt for unlimited lifespan; otherwise, 1%. https://www.frontiersin.org/articles/10.3389/fgene.2015.00353/full
Would it be possible and cost-effective to release video courses at a much lower cost?
I know this conversation is very old and Holden’s outlook on the subject has since matured (see Open Philanthropy’s grants to aging research, and Open Philanthropy’s analysis of aging research, although it is still dismissive of SENS), but I still want to point out what I think were the mistakes he made here.
Holden didn’t seem to grasp how different in scope the SENS plan is from the kind of research a single brilliant researcher can carry forward in the traditional way. SENS requires a plethora of different therapies whose development would need an entire NIA to itself… and even this would cover only the first phases of research, not clinical trials. I don’t get how he could be confused about this. Quoting Holden:
You [Aubrey] state that you have a high-expected-value plan that the academic world can’t recognize the value of because of shortcomings such as “balkanisation” and risk aversion. I believe it may be true that the academic world has such problems to a degree; however, I also believe that there are a lot of extremely talented people in academia and that they often (though not necessarily always) find ways to move forward on promising work.
Also, I’m confused about why Holden put so much weight on Dario Amodei’s opinion over Aubrey’s. Dario is an AI researcher.
[...] And as my summary of our conversation shows, he [Dario] acknowledges that the world of biomedical research may have certain suboptimal incentives, but didn’t seem to think that these issues are leaving specific, visible outstanding research programs on the table the way that your email implies. [...]
Thankfully, the present-day Holden at Open Phil obviously doesn’t think this is the case.
About life extension: see the SENS Research Foundation, an example of a specific org very focused on the moonshot, if you don’t already know it.
How to evaluate neglectedness and tractability of aging research
Submission (for low bandwidth Oracle)
Any question such that a correct answer to it should very clearly benefit both humanity and the Oracle. Even if the Oracle has preferences we can’t completely guess, we can probably still say that such questions could be about the survival of both humanity and the Oracle, or about the survival of only the Oracle or its values. This is because even if we don’t know exactly what the Oracle is optimising for, we can guess that it will not want to destroy itself under the vast majority of its possible preferences. So it will give humanity more power to protect both, or only the Oracle.
Example 1: let’s say we discover the location of an alien civilisation, and we want to minimise the chances of it destroying our planet. Then we must decide what actions to take. Let’s say the Oracle can only answer “yes” or “no”. Then we can submit questions such as whether we should take a particular action or not. I suspect this kind of situation falls within a more general case, “use the Oracle to avoid a threat to the entire planet, Oracle included”, within which questions should be safe.
Example 2: Let’s say we want to minimise the chance that the Oracle breaks down due to accidents. We can ask it what the best course of action is, given a set of ideas we come up with. In this case we should make sure beforehand that nothing in the list makes it impossible or too difficult for humans to shut the Oracle down.
Example 3: Let’s say we become practically sure that the Oracle is aligned with us. Then we could ask it to choose the best course of action from a list of strategies devised to make sure it doesn’t become misaligned. In this case the answer benefits both us and the Oracle, because the Oracle should have an incentive not to change its own values. I think this one is more sketchy and possibly dangerous because of the premise: the Oracle could obviously pretend to be aligned. But granting the premise, it should be a good question, although I don’t know how useful it is as a submission under this post (maybe it’s too obvious, or too unrealistic given the premise).
Impact of aging research besides LEV
The definition of LEV I used in the previous post is: “Longevity Escape Velocity (LEV) is the minimum rate of medical progress such that individual life expectancy is raised by at least one year per year if medical interventions are used”. So it doesn’t lead to an unbounded life expectancy. In fact, in the first post I used a simplified calculation to estimate life expectancy after LEV at approximately 1000 years. 1000 years is what comes up using the same idea as your hydra example (risk of death held flat at the risk of death of a young person), but in reality it should be slightly less, because the calculation leaves out the period just after hitting LEV, when the risk of death is still falling. We are not dealing with infinite utilities.
The main measure of impact I gave in the post comes from these three values and some corrections:
1000 QALYs: life expectancy of a person after hitting LEV
36,500,000 deaths/year due to aging
Expected number of years LEV is made closer by (by a given project examined)
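The 1000-year figure above follows from a standard property of a flat (constant) annual death risk: years-until-death is then geometrically distributed, so expected remaining lifespan is 1 / risk. A minimal sketch, assuming a hypothetical young-adult annual death risk of 0.1% (my illustrative parameter, chosen to reproduce the post's ~1000-year figure, not a value from the post itself):

```python
def life_expectancy_flat_risk(annual_death_risk: float) -> float:
    """Expected years lived when the yearly risk of death stays flat.

    Years-until-death is geometric with success probability p,
    so the expectation is 1 / p.
    """
    return 1.0 / annual_death_risk


def life_expectancy_by_simulation_free_sum(annual_death_risk: float,
                                           horizon: int = 100_000) -> float:
    """Cross-check: directly sum t * P(die in year t) over a long horizon."""
    p = annual_death_risk
    return sum(t * p * (1 - p) ** (t - 1) for t in range(1, horizon + 1))


# Hypothetical parameter: ~0.1% annual death risk for a young adult.
print(life_expectancy_flat_risk(0.001))             # 1000.0
print(round(life_expectancy_by_simulation_free_sum(0.001)))  # ~1000
```

The second function just verifies the closed form numerically; any realistic post-LEV trajectory (risk still falling toward the young-adult floor) would give a somewhat lower number, as the post notes.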
Absolutely no one had thought of that in the YouTube comment section under his interview with JRE