My inner simulation of Yudkowsky says, roughly:
Alignment is Very Hard and will not be solved in our world, so we must Die With Dignity.
Alignment probably would end up being solved in dath ilan (Yudkowsky has claimed this is >97% likely).
We don’t live in dath ilan, boo hoo.
So maybe the easiest way to solve alignment is to fix civilizational inadequacy first, and then let our adequate civilization figure out the rest.
And maybe the easiest way to solve civilizational inadequacy is by legalizing prediction markets.
My understanding is that dath ilan hasn’t solved the alignment problem but has solved the problem of getting people not to try to build AIs powerful enough to be problematic if unaligned. Apparently it does this mostly by keeping the idea of powerful AIs secret from most of the population, which seems absurd to me. I am not sure why Eliezer thinks dath ilan could keep that secret more effectively than it could let people know that powerful AI is possible and dangerous and then Do Something That Is Not That; both are obviously impractical on Earth, but I think dath ilan’s claimed superior abilities help much more with the latter than with the former.
Prediction markets would lead to good AI timeline forecasts.
It is easy to bet on p(doom), as long as doing so is legally supported. If Alice believes that the world will end in five years, and Bob does not:
Alice and Bob sign a contract.
Bob gives Alice $50k.
If the world is still around in five years, Alice must give Bob $200k.
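The implied odds of this contract can be checked with a short sketch. This is a minimal expected-value calculation under simplifying assumptions: it ignores interest, time-discounting, counterparty risk, and the awkward fact that money may be worthless to Alice if she wins.

```python
def break_even_p_doom(upfront: float, payback: float) -> float:
    """Return the p(doom) at which Alice is indifferent to the bet.

    Alice receives `upfront` now and pays `payback` if the world
    survives. Her expected value is:
        EV = upfront - (1 - p) * payback
    Setting EV = 0 and solving for p gives the break-even point.
    """
    return 1 - upfront / payback

# Alice gets $50k now, owes $200k in five years if the world survives.
p = break_even_p_doom(50_000, 200_000)
print(p)  # 0.75
```

So Alice comes out ahead (in expectation, under these assumptions) only if her p(doom) over the contract period exceeds 0.75, and Bob only if his is below it; the terms the two parties agree on reveal where their probabilities lie.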
Therefore, prediction markets will forecast the apocalypse.
Decision markets will evaluate {legislation/executive orders/candidates} with respect to their effects on the apocalypse-forecasts.
Therefore, prediction markets will enable coordination against building AGI.
My understanding is that dath ilan has the civilizational cooperation to not build AGI or increase its technological capacities until the alignment problem is solved, which means it can take as long as it needs to solve the alignment problem. No “race against time” issue. I think it’s fairly obvious there is a secret government project working on AI alignment.
It has been explicitly written into one of the dath ilan isekai stories that there is indeed a project in dath ilan to both solve AI alignment and to suppress further computer technology progress until after alignment is fully solved.
Edit: More than that: it is essentially the central project that defines much of the course of civilization.
This is not obvious to me, although I wish it were the case. How do you think you know this?
Sorry, I meant that it is the case in dath ilan.
I think the number of dath ilani who think of the concept of AGI is plausibly pretty substantial, but they just go talk to a Keeper about it.