Your example of the janitor interrupting the scientist is a good demonstration of my point. I've organized over a hundred cybersecurity events featuring over a thousand speakers, and I've never had a single janitor interrupt a talk. On the other hand, I've had numerous "experts" attempt to pass off fiction as fact, draw conclusions from faulty data, and, thanks to their inflated egos, generally behave far worse than any janitor would.
Based on my conversations with computer science and philosophy professors who aren't EA-affiliated, and several who are, their posts are frequently downvoted simply because they represent opposing viewpoints.
Do the moderators of this forum regularly assess how they could improve the online culture so that there's more diversity of perspective?
Fired from OpenAI's Superalignment team, Aschenbrenner now runs a fund dedicated to backing AGI-focused startups, according to The Information.
“Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former GitHub CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collison and Stripe president John Collison, according to his personal website.
In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”
“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said.”
What happened to his concerns over safety, I wonder?