You criticize Altman for pushing ahead with dangerous AI tech, but then most of what you’d spend the money on is pushing ahead with tech that isn’t directly dangerous. Sure, that’s better. But it doesn’t solve the issue that we’re headed into an out-of-control future. Where’s the part where we use money to improve the degree to which thoughtful high-integrity people (or prosocial AI successor agents with those traits) are able to steer where this is all going? (Not saying there are easy answers.)
“This is about how to spend money on AI safety” isn’t the point of the opening of the post. It’s more:
Here’s some stuff I’d talk about anyway (like thinking of the economy in terms of energy flows) and a convenient way to frame it that was in the news!
Wow, we could also spend money on not maximally accelerating AI!
I see a lot of people saying that AI is urgently needed to solve [problem] like global warming, but here is how you could solve some of those problems by just solving the problems directly.
AI is something I’ve thought about a lot, but I think I’ve already posted everything I want to say about it, and people didn’t seem to appreciate those posts that much.
Thanks for linking it! I think one reason I bounced off this article the first time was that, going by the title, I had pattern-matched it to the many posts on this platform that mostly distill existing arguments.