It seems like a significant amount of decision theory progress happened between 2006 and 2010, and since then progress has stalled.
Counterfactual mugging was invented independently by Gary Drescher in 2006, and by Vladimir Nesov in 2009.
Counterlogical mugging was invented by Vladimir Nesov in 2009.
The “agent simulates predictor” problem (now popularly known as the commitment races problem) was invented by Gary Drescher in 2010.
The “self-fulfilling spurious proofs” problem (now popularly known as the 5-and-10 problem) was invented by Benja Fallenstein in 2010.
Updatelessness was first proposed by Wei Dai in 2009.
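For concreteness, here is a minimal sketch of the first problem on that list, counterfactual mugging, and of why updatelessness bears on it. The payoff numbers and names in the snippet are invented for illustration and are not taken from any of the posts mentioned above.

```python
# A minimal, illustrative sketch of counterfactual mugging (payoff numbers and
# names are made up for illustration; they are not from the discussion above).
# Setup: Omega flips a fair coin. On tails it asks the agent for $100; on heads
# it pays the agent $10,000, but only if it predicts the agent would have paid on tails.

PAY_COST = 100      # cost of paying Omega when the coin lands tails
REWARD = 10_000     # payout on heads, granted only if Omega predicts the agent pays on tails


def policy_expected_value(pays_on_tails: bool) -> float:
    """Expected value of a policy, evaluated before the coin flip (the updateless view)."""
    heads_branch = REWARD if pays_on_tails else 0   # Omega rewards predicted payers
    tails_branch = -PAY_COST if pays_on_tails else 0
    return 0.5 * heads_branch + 0.5 * tails_branch


# From the ex-ante (updateless) perspective, committing to pay is worth +$4,950 in expectation:
assert policy_expected_value(True) == 4950.0
assert policy_expected_value(False) == 0.0

# An agent that first updates on having seen tails compares -$100 with $0 and refuses to pay,
# which is exactly the conflict between updateful and updateless reasoning the problem exposes.
```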
Yeah, it seems like a bunch of low-hanging fruit was picked around that time, but that opened up a vista of new problems that are still out of reach. I wrote a post about this, which I don't know whether you've seen.
(This has been my experience with philosophical questions in general: every seeming advance just opens up a vista of new, harder problems. This is a major reason I switched my attention from object-level philosophical questions to trying to ensure that AIs will be philosophically competent.)
Thanks for the link. I believe I read it a while ago, but it is useful to reread it from my current perspective.
trying to ensure that AIs will be philosophically competent

I think such scenarios are plausible. I know some people argue that certain decision theory problems cannot be safely delegated to AI systems, but if we as humans can work on these problems safely, I expect we could build systems that are about as safe (by crippling their ability to establish subjunctive dependence) but significantly more competent at philosophical progress than we are.
I think I’ve been (slowly) making progress.
I think we would be able to make progress on this if people seriously wanted to, but understandably it's not the highest priority.