I’m really enjoying this series of posts. Perhaps this will be addressed in parts 4-6, but I’m wondering about prescriptions that might follow from c-risks.
I have a sense that humanity has built a tower of knowledge and capital and capacities, and just above the tower is a rope upwards labeled “AGI”. We don’t know exactly where it leads, aside from upwards.
But the tower we’ve built is also dissolving along multiple dimensions. Civilizations require many sorts of capital and investment, and in some ways we’ve been “eating the seed corn.” Decline is a real possibility, and if we leap for the AGI rope and miss, we might fall fairly hard and have to rebuild for a while.
There might be silver linings to a fall, as long as it’s not too hard and we get a second chance at things. Maybe the second attempt at the AGI rope could be more ‘sane’ in certain ways. My models aren’t good enough here to know what scenario to root for.
At any rate, today it seems like we’re in something like Robin Hanson’s “Dreamtime”, an era of ridiculous surplus, inefficiency, and delusion. Dreamtimes are finite; they end. I think either AGI or civilizational collapse will end our Dreamtime.
What’s worth doing in this Dreamtime, before both surpluses and illusions vanish? My sense is:
If we might reach AGI during this cycle, I think it would be good to make a serious attempt at understanding consciousness. (An open note here that I stepped down from the board of QRI and ended all affiliation with the institution. If you want to collaborate please reach out directly.)
If we’re headed for collapse instead of AGI, it seems wise to use Dreamtime resources to invest in forms of capital that will persist after a collapse and be useful for rebuilding a benevolent civilization.
Investing in solving the dysfunctions of our time and preventing a hard collapse also seems hugely worthwhile, if it’s tractable!

Looking forward to your conclusions.
Thanks for the comment! I look into some of the philosophy required in part 4, x-risk relationships in part 5, and to-dos in part 6. Understanding consciousness is an important sub-component and might be needed sooner than we think. I think an important piece is understanding which modifications to consciousness are harmful and which are beneficial. A sub-problem there is working out which chemicals or organisms alter consciousness, and in what ways, as well as which ideas and experiences seem to have a lasting effect on people. It’s possible this is more doable than understanding consciousness as a whole, but it certainly touches a lot of the core problem.