Right now it seems to me that one of the highest impact things not likely to be done by default is substantially increased funding for AI safety.
And another interesting one from the summit:
“There was almost no discussion around agents—all gen AI & model scaling concerns.
It’s perhaps because agent capabilities are mediocre today and thus hard to imagine, similar to how regulators couldn’t imagine GPT-3’s implications until ChatGPT.”—https://x.com/kanjun/status/1720502618169208994?s=46&t=D5sNUZS8uOg4FTcneuxVIg