Pausing for what?
I care about AI safety because I believe that preserving humanity's potential is one of the most important things to work on. The Precipice lays out a future for humanity that I find deeply inspiring.
Preserving humanity's potential seems so important to me because it will enable us to build the institutions and technology that will allow us to understand the nature of the universe. What is the truth? Is there such a thing as the truth? Is there some destiny that we ought to realise?
Since uncontrollable AI seems like the biggest reason why we (humanity) won't be able to figure these questions out, a big part of my mission in life is to make sure that AI does not kill everyone.
This has also made me worried that we don't have enough time to solve the alignment problem, and that we need to pause to buy ourselves enough time.
Today I encountered a crux that might change my entire perspective, and I feel a bit intimidated by it.
Crux: What if more intelligent species are simply better at answering the questions above?
Just because we can feel pleasant and unpleasant emotions, and we are conscious, does that make us more valuable? Are these the things that should be prioritised over truth-seeking? I genuinely don't know.
People who support a pause posit that it is our only hope of survival, and I think that is likely true. But what if the ultimate goal is not the survival of the human species, but to understand the truth of the universe and to act on it?
What if humans are not in the best position to actually do that? What if a superintelligence could realise this goal far more effectively?