There’s also this podcast from just yesterday. It’s really good. Sam continues to say all the right things; in fact, I think this is the most reassuring he’s ever been on taking the societal risks seriously, if not necessarily the existential risks. Which leaves me baffled. He’s certainly a skilled enough social engineer to lie convincingly, but he sounds so dang sincere. I’m weakly concluding for the moment that he just doesn’t think the alignment problem is that hard. I think that’s wrong, but the matter is curiously murky, so it’s not necessarily an irrational opinion to hold. Getting more meaningful discussion between optimistic and pessimistic alignment experts would help close that gap.
He has a stance towards risk that is a necessary condition for becoming the CEO of a company like OpenAI, but that doesn’t give you a high probability of building a safe ASI:
https://blog.samaltman.com/what-i-wish-someone-had-told-me
“Inaction is a particularly insidious type of risk.”
https://blog.samaltman.com/how-to-be-successful
“Most people overestimate risk and underestimate reward.”
https://blog.samaltman.com/upside-risk
“Instead of downside risk [2], more investors should think about upside risk—not getting to invest in the company that will provide the return everyone is looking for.”
There are a couple of reasons why, even though Sam is surrounded by people like Ilya and Jan warning him of the consequences, he’s currently unwilling to change course:
Staring down AGI ruin means staring into a fundamentally, deeply scary abyss. Doubly so when Sam himself would have been the one at fault.
Like many political leaders, he’s unable to let go of power because he believes that he himself is most capable of wielding it, even when the resulting rivalry causes both sides to take actions worse than either would take independently if they weren’t trying to wrest power away from each other.
More than most, Sam envisions the upside. He’s a vegetarian and cares a lot about animal suffering (“someday eating meat is going to get cancelled, and we are going to look back upon it with horror. we have a collective continual duty to find our way to better morals”), and he sees the power of AGI to stop that suffering. He talks a lot about curing cancer and other objectively good technological progress. He probably sees it as: heads, I live forever and am the greatest person for bringing utopia; tails, I’m dead anyway, as I would be in 50 years.
Ultimately, I think Sama sees the coin flip (though he believes utopian AGI is the more likely outcome) and, being the ever-optimist, is willing to flip it for the good things, because it’s too scary to believe that the abyss side would happen.