For what it’s worth, I get the sense that the Oxford EA research community is pretty optimistic about the future, but generally seems to believe that the risks are the more pragmatic thing to pay attention to.
Anders Sandberg is doing work on the potential of humans (or related entities) expanding through the universe. The phrase “Cosmic Endowment” comes up here and there. Stuart Armstrong recently created a calendar for the year 12020.
I personally have a very hard time imagining exactly what things will be like post-AGI, conditional on it going well, or what we could come up with now that would make them better. It seems like future research could figure out a lot of those details. But I’m in some ways incredibly optimistic about the future. This model gives a very positive result, though also a not very specific one.
I think my personal view is something like,
“Things seem super high-EV. In many ways, we as a species seem to be in a setting rich with opportunity. Let’s generally try to be as careful as possible to make sure we don’t mess up.”
Note that high-EV does not mean high-probability. It could be that we have a 0.1% chance of surviving as a species, but that if we do, the net benefit would be many orders of magnitude larger. I use this figure not because I believe we have a 0.1% chance, but because I think it’s a pretty reasonable lower bound.
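To make the low-probability/high-EV point concrete, here’s a toy calculation. The 0.1% survival figure is the lower bound from above; the payoff multiplier is entirely hypothetical, chosen only to illustrate “many orders of magnitude”:

```python
# Toy illustration: a low-probability outcome can still dominate in EV.
# Numbers are hypothetical, not estimates.
p_survival = 0.001   # the 0.1% lower bound discussed above
payoff = 10**6       # hypothetical payoff, in multiples of the status quo
status_quo = 1.0     # baseline value if we don't pursue the opportunity

expected_value = p_survival * payoff
print(expected_value)                 # 1000.0
print(expected_value > status_quo)    # True
```

Even at a 0.1% chance, a six-orders-of-magnitude payoff leaves the expected value a thousand times the baseline, which is the sense in which the bet looks “super high-EV” despite being low-probability.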